    Reasoning & Querying – State of the Art

    Various query languages for Web and Semantic Web data have emerged in recent years, both for practical use and as an area of research in the scientific community. At the same time, the broad adoption of the internet, where keyword search is used in many applications such as search engines, has familiarized casual users with keyword queries as a way of retrieving information. Unlike this easy-to-use form of querying, traditional query languages require knowledge of the language itself as well as of the data to be queried. Keyword-based query languages for XML and RDF bridge the gap between the two, aiming to enable simple querying of semi-structured data, which is relevant, e.g., in the context of the emerging Semantic Web. This article presents an overview of the field of keyword querying for XML and RDF.

    Identification of Design Principles

    This report identifies those design principles for a (possibly new) query and transformation language for the Web supporting inference that are considered essential. Based upon these design principles, an initial strawman is selected. Scenarios for querying the Semantic Web illustrate the design principles and their reflection in the initial strawman, i.e., a first draft of the query language to be designed and implemented by the REWERSE working group I4.

    Ranking structured documents using utility theory in the Bayesian network retrieval model

    In this paper, a new method based on utility and decision theory is presented to deal with structured documents. These methodologies are applied to refine an initial ranking of structural units generated by an information retrieval model based on Bayesian networks. The units are rearranged in the new ranking by combining their posterior probabilities, obtained in the first stage, with the expected utility of retrieving them. The experimental work was carried out on the Shakespeare structured collection, and the results show that the new approach improves retrieval effectiveness.
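    As a rough sketch of the expected-utility re-ranking idea described above (the paper's exact formulation within the Bayesian network retrieval model may differ), a structural unit u retrieved for query q can be re-scored by weighting the utility of each relevance outcome by its posterior probability:

```latex
% Hedged sketch of generic expected-utility re-ranking; r ranges over the
% relevance states considered, P(r | u, q) is the posterior from the first
% (Bayesian network) stage, and U(r, u) is the utility of retrieving u when
% its true relevance is r.
EU(u \mid q) = \sum_{r} P(r \mid u, q)\, U(r, u)
```

    Units would then be ordered by this expected utility rather than by their posterior probabilities alone.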

    The accessibility dimension for structured document retrieval

    Structured document retrieval aims at retrieving the document components that best satisfy a query, instead of merely retrieving pre-defined document units. This paper reports on an investigation of a tf-idf-acc approach, where tf and idf are the classical term frequency and inverse document frequency, and acc is a new parameter called accessibility that captures the structure of documents. The tf-idf-acc approach is defined using a probabilistic relational algebra. To investigate the retrieval quality and estimate the acc values, we developed a method that automatically constructs diverse test collections of structured documents from a standard test collection, with which experiments were carried out. The analysis of the experiments provides estimates of the acc values.
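    As an illustrative reading of the tf-idf-acc weighting (the paper defines it within a probabilistic relational algebra, so the actual combination may differ), the weight of term t in a document element e could simply multiply the three components:

```latex
% Illustrative sketch only: tf and idf are the classical term statistics,
% while acc(e) captures how accessible element e is within the document
% structure.
w(t, e) = \mathit{tf}(t, e) \cdot \mathit{idf}(t) \cdot \mathit{acc}(e)
```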

    A web services based framework for efficient monitoring and event reporting.

    Network and Service Management (NSM) is a research discipline with significant research contributions over the last 25 years. Despite the numerous standardised solutions that have been proposed for NSM, the quest for an "all-encompassing technology" still continues. A technology recently introduced to address NSM problems is Web Services (WS). Despite the research effort put into WS and their potential for addressing NSM objectives, there are efficiency, interoperability, and other issues that need to be solved before WS can be used for NSM. This thesis looks at two techniques for increasing the efficiency of WS management applications so that the latter can be used for efficient monitoring and event reporting. The first is a query tool we built for the efficient retrieval of management state data close to the devices where it is hosted. The second is a set of policies used to delegate a number of tasks from a manager to an agent, making WS-based event reporting systems more efficient. We tested the performance of these mechanisms by incorporating them into a custom monitoring and event reporting framework and supporting systems we built, and compared them against other mechanisms proposed for the same tasks (XPath) as well as earlier technologies such as SNMP. Through these tests we have shown that these mechanisms allow WS to be used efficiently in various monitoring and event reporting scenarios. Having shown the potential of our techniques, we also present the design and implementation challenges of building a GUI tool to support and enhance the above systems with extra capabilities. In summary, we expect that the remaining problems WS face will be solved in the near future, making WS a capable platform for NSM.
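    The abstract mentions XPath as one of the mechanisms compared for retrieving management state data close to the device. As a minimal, self-contained sketch of that general idea (the XML structure and element names below are invented for illustration and are not taken from the thesis), an XPath filter can select only the relevant parts of the state before it is reported to the manager:

```python
# Minimal sketch: filter management state with XPath on the agent side so
# that only the matching data is reported to the manager. The document
# structure (interfaces/interface, status, name) is assumed for illustration.
from lxml import etree

state = etree.fromstring(b"""<interfaces>
  <interface name="eth0"><status>up</status></interface>
  <interface name="eth1"><status>down</status></interface>
</interfaces>""")

# Report only the names of interfaces that are down, instead of shipping
# the whole state document.
down = state.xpath("//interface[status='down']/@name")
print(down)  # ['eth1']
```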

    POLIS: a probabilistic summarisation logic for structured documents

    As the availability of structured documents, formatted in markup languages such as SGML, RDF, or XML, increases, retrieval systems increasingly focus on the retrieval of document elements rather than entire documents. Additionally, abstraction layers in the form of formalised retrieval logics have allowed developers to include search facilities in numerous applications without needing detailed knowledge of retrieval models. Although automatic document summarisation has been recognised as a useful tool for reducing the workload of information system users, very few such abstraction layers have been developed for the task of automatic document summarisation. This thesis describes the development of an abstraction logic for summarisation, called POLIS, which provides users (such as developers or knowledge engineers) with high-level access to summarisation facilities. Furthermore, POLIS allows users to exploit the hierarchical information provided by structured documents. The development of POLIS is carried out step by step. We start by defining a series of probabilistic summarisation models, which assign weights to document elements at a user-selected level. These summarisation models are the ones accessible through POLIS. The formal definition of POLIS is performed in three steps. We start by providing a syntax for POLIS, through which users and knowledge engineers interact with the logic. This is followed by a definition of the logic's semantics. Finally, we provide details of an implementation of POLIS. The final chapters of this dissertation are concerned with the evaluation of POLIS, which is conducted in two stages. First, we evaluate the performance of the summarisation models by applying POLIS to two test collections, the DUC AQUAINT corpus and the INEX IEEE corpus. This is followed by application scenarios for POLIS, in which we discuss how POLIS can be used in specific IR tasks.
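    As a very rough illustration of the idea of weighting document elements at a user-selected level (POLIS's actual models, syntax, and semantics are defined in the thesis and are not reproduced here), one could score each element at a chosen level by a simple query-term proportion:

```python
# Rough illustration only: assign a crude weight to each element at a chosen
# structural level, based on the fraction of its terms that are query terms.
# The XML, the level ("/article/sec"), and the scoring are all assumptions
# made for this sketch, not POLIS's actual summarisation models.
from lxml import etree

doc = etree.fromstring(b"""<article>
  <sec>probabilistic logics for structured document retrieval</sec>
  <sec>evaluation of summarisation models on test collections</sec>
</article>""")

query_terms = {"summarisation", "retrieval"}

def weight(element):
    terms = element.text.lower().split()
    return sum(t in query_terms for t in terms) / len(terms)

for sec in doc.xpath("/article/sec"):
    print(sec.text, "->", round(weight(sec), 2))
```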

    A Generic Approach and Framework for Managing Complex Information

    Several application domains, such as healthcare, incorporate domain knowledge into their day-to-day activities to standardise and enhance their performance. Such incorporation produces complex information, which contains two main clusters of information (active and passive) with internal connections between them. The active cluster determines the recommended procedure that should be taken in reaction to specific situations. The passive cluster contains the information that describes these situations, other descriptive information, and the execution history of the complex information. In the healthcare domain, a medical patient plan is an example of complex information produced during disease management from specific clinical guidelines. This thesis investigates complex information management at the application-domain level in order to support day-to-day organisational activities. In this thesis, a unified generic approach and framework, called SIM (Specification, Instantiation and Maintenance), has been developed for computerising complex information management. The SIM approach aims at providing a conceptual model for the complex information at different abstraction levels (generic and entity-specific). In the SIM approach, the complex information at the generic level is referred to as a skeletal plan, from which several entity-specific plans are generated. The SIM framework provides comprehensive aspects for managing the complex information. In the SIM framework, the complex information goes through three phases: specifying the skeletal plans, instantiating entity-specific plans, and then maintaining these entity-specific plans during their lifespan. In this thesis, a language called AIM (Advanced Information Management) has been developed to support the main functionalities of the SIM approach and framework. AIM consists of three components: AIMSL, the AIM ESPDoc model, and AIMQL. AIMSL is the AIM specification component that supports the formalisation of the complex information at the generic level (skeletal plans). The AIM ESPDoc model is a computer-interpretable model for the entity-specific plan. AIMQL is the AIM query component that supports manipulating and querying the complex information, and provides special manipulation operations and query capabilities, such as replay queries. The applicability of the SIM approach and framework is demonstrated through the development of a proof-of-concept system, called AIMS, using available technologies such as XML and a DBMS. The thesis evaluates the AIMS system using a clinical case study applied to a medical test request application.
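    As a toy sketch of the specification-and-instantiation idea described above (the element names and plan structure below are invented for illustration; they do not reproduce AIMSL or the ESPDoc model), a generic skeletal plan can be instantiated into an entity-specific plan by copying it and binding entity-specific data:

```python
# Toy sketch: instantiate an entity-specific plan from a generic skeletal
# plan by copying it and binding patient-specific data. The plan format here
# is an assumption for illustration, not the AIMSL/ESPDoc representation.
import copy
from lxml import etree

skeletal = etree.fromstring(b"""<skeletalPlan name="diabetes-followup">
  <step action="order-test" test="HbA1c"/>
  <step action="review-results"/>
</skeletalPlan>""")

def instantiate(plan, patient_id):
    specific = copy.deepcopy(plan)      # keep the skeletal plan unchanged
    specific.tag = "entitySpecificPlan"
    specific.set("patient", patient_id) # bind the plan to one entity
    return specific

plan = instantiate(skeletal, "patient-42")
print(etree.tostring(plan, pretty_print=True).decode())
```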