    Personalization by Partial Evaluation.

    The central contribution of this paper is to model personalization by the programmatic notion of partial evaluation. Partial evaluation is a technique used to automatically specialize programs, given incomplete information about their input. The methodology presented here models a collection of information resources as a program (which abstracts the underlying schema of organization and flow of information), partially evaluates the program with respect to user input, and recreates a personalized site from the specialized program. This enables a customizable methodology called PIPE that supports the automatic specialization of resources, without enumerating the interaction sequences beforehand. Issues relating to the scalability of PIPE, information integration, sessionizing scenarios, and case studies are presented.
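The idea of specializing a "site program" against partial user input can be sketched in a few lines. This is a hypothetical toy, not PIPE's actual representation; the function names and the topic table are illustrative assumptions.

```python
# Toy sketch of personalization by partial evaluation: the full site is a
# program over (topic, subtopic); fixing the known input yields a residual
# program that only asks for what remains unknown.
SITE = {
    ("sports", "scores"): "Latest scores page",
    ("sports", "news"): "Sports news page",
    ("finance", "stocks"): "Stock ticker page",
}

def render_site(topic, subtopic):
    """The full program: needs both inputs to produce a page."""
    return SITE.get((topic, subtopic), "404")

def specialize(topic):
    """Partially evaluate render_site w.r.t. the known 'topic' input,
    returning a residual program over the remaining input."""
    known = {k[1]: v for k, v in SITE.items() if k[0] == topic}
    def residual(subtopic):
        return known.get(subtopic, "404")
    return residual

sports_site = specialize("sports")   # a personalized residual "site"
print(sports_site("scores"))         # -> Latest scores page
```

The specialized function never re-examines the fixed input, which is the essence of the speedup partial evaluation offers.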

    Investigation into Indexing XML Data Techniques

    The rapid development of XML technology improves the WWW, since XML data has many advantages and has become a common technology for transferring data across the Internet. Therefore, the objective of this research is to investigate and study XML indexing techniques in terms of their structures. The main goal of this investigation is to identify the main limitations of these techniques and any other open issues. Furthermore, this research considers the most common XML indexing techniques and performs a comparison between them. Subsequently, this work examines how these limitations arise. To conclude, the main problem shared by all XML indexing techniques is the trade-off between the size and the efficiency of the indexes: indexes must grow large in order to perform well, and none of them suits all users' requirements. Nevertheless, each of these techniques offers advantages in certain respects.
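The size-versus-efficiency trade-off the survey highlights can be seen in even the simplest structure index. The sketch below is an assumed, minimal path index (not any specific technique from the survey): it makes path lookups a single dictionary access, at the cost of materializing every distinct root-to-element path.

```python
# Minimal XML path index: map each root-to-element path to its nodes.
import xml.etree.ElementTree as ET
from collections import defaultdict

def build_path_index(xml_text):
    root = ET.fromstring(xml_text)
    index = defaultdict(list)
    def walk(elem, path):
        path = path + "/" + elem.tag
        index[path].append(elem)      # every distinct path is stored
        for child in elem:
            walk(child, path)
    walk(root, "")
    return index

doc = "<lib><book><title>XML</title></book><book><title>DB</title></book></lib>"
idx = build_path_index(doc)
print([e.text for e in idx["/lib/book/title"]])  # -> ['XML', 'DB']
```

A richer index (supporting, say, ancestor axes or value predicates) grows correspondingly larger, which is exactly the tension the abstract describes.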

    Towards a query language for annotation graphs

    The multidimensional, heterogeneous, and temporal nature of speech databases raises interesting challenges for representation and query. Recently, annotation graphs have been proposed as a general-purpose representational framework for speech databases. Typical queries on annotation graphs require path expressions similar to those used in semistructured query languages. However, the underlying model is rather different from the customary graph models for semistructured data: the graph is acyclic and unrooted, and both temporal and inclusion relationships are important. We develop a query language and describe optimization techniques for an underlying relational representation. Comment: 8 pages, 10 figures
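The inclusion relationships the abstract mentions can be illustrated with a toy annotation graph. The encoding below (arcs as tuples, nodes mapped to time offsets, a layer per annotation type) is an assumption for illustration, not the paper's actual model or query language.

```python
# Toy annotation graph: nodes carry time offsets; arcs carry a layer
# ("phrase" or "word") and a label. A simple inclusion query finds the
# words whose time span falls inside a given phrase's span.
arcs = [
    # (start_node, end_node, layer, label)
    (0, 3, "phrase", "greeting"),
    (0, 1, "word", "hello"),
    (1, 3, "word", "world"),
    (3, 5, "word", "again"),
]
times = {0: 0.0, 1: 0.4, 3: 0.9, 5: 1.3}

def words_in(phrase_label):
    span = next((times[s], times[e]) for s, e, layer, lab in arcs
                if layer == "phrase" and lab == phrase_label)
    return [lab for s, e, layer, lab in arcs
            if layer == "word" and times[s] >= span[0] and times[e] <= span[1]]

print(words_in("greeting"))  # -> ['hello', 'world']
```

In a relational representation this query becomes a self-join of the arc table on the temporal containment condition, which is where the optimization opportunities arise.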

    A model for querying semistructured data through the exploitation of regular sub-structures

    Much research has been undertaken in order to speed up the processing of semistructured data in general and XML in particular. Many approaches for storage, compression, indexing and querying exist, e.g. [1, 2]. We do not present yet another such algorithm but a unifying model in which these algorithms can be understood. The key idea behind this research is the assumption that most practical queries are based on a particular pattern of data that can be deduced from the query and which can then be captured using a regular structure amenable to efficient processing techniques.
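The deduction of a regular structure from a query can be sketched concretely. The query syntax and translation below are assumptions for illustration only: a simple path query implies a regular pattern over root-to-node tag sequences, checkable with a regular expression rather than a general tree traversal.

```python
# Illustrative only: translate an XPath-like query into a regular
# expression over '/'-separated tag paths.
import re

def query_to_pattern(query):
    # '//' is taken to mean "any prefix of tags"; '/' separates fixed tags
    steps = [s for s in query.split("/") if s]
    prefix = r"(?:/[^/]+)*" if query.startswith("//") else ""
    body = "".join("/" + re.escape(s) for s in steps)
    return re.compile("^" + prefix + body + "$")

pat = query_to_pattern("//book/title")
paths = ["/lib/book/title", "/lib/book/author", "/lib/shelf/book/title"]
print([p for p in paths if pat.match(p)])
# -> ['/lib/book/title', '/lib/shelf/book/title']
```

Once the pattern is regular, the data matching it can be laid out and scanned with the efficient, structure-aware techniques the model aims to unify.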

    A contribution to the Semantics of Xcerpt, a Web Query and Transformation Language

    Xcerpt [1] is a declarative and pattern-based query and transformation language.

    EquiX---A Search and Query Language for XML

    EquiX is a search language for XML that combines the power of querying with the simplicity of searching. Requirements for such languages are discussed and it is shown that EquiX meets the necessary criteria. Both a graphical abstract syntax and a formal concrete syntax are presented for EquiX queries. In addition, the semantics is defined and an evaluation algorithm is presented. The evaluation algorithm is polynomial under combined complexity. EquiX combines pattern matching, quantification and logical expressions to query both the data and meta-data of XML documents. The result of a query in EquiX is a set of XML documents. A DTD describing the result documents is derived automatically from the query. Comment: technical report of Hebrew University, Jerusalem, Israel
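The "result is a set of documents" style of evaluation can be roughly illustrated as follows. This is not EquiX's syntax or algorithm; the pattern here is simply a target tag plus a predicate on its text, both illustrative assumptions.

```python
# Rough illustration of search-style XML querying: a document matches if
# some element with the given tag satisfies the predicate; the query
# result is the set of matching documents.
import xml.etree.ElementTree as ET

def matches(doc_xml, tag, predicate):
    root = ET.fromstring(doc_xml)
    return any(predicate(e.text or "") for e in root.iter(tag))

docs = {
    "d1": "<paper><title>EquiX for XML</title></paper>",
    "d2": "<paper><title>Graph queries</title></paper>",
}
result = {name for name, xml in docs.items()
          if matches(xml, "title", lambda t: "XML" in t)}
print(sorted(result))  # -> ['d1']
```

EquiX's actual patterns are richer (quantifiers, logical connectives, meta-data conditions), but the document-set result type is the same.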

    A research protocol for developing a Point-Of-Care Key Evidence Tool 'POCKET': a checklist for multidimensional evidence reporting on point-of-care in vitro diagnostics.

    INTRODUCTION: Point-of-care in vitro diagnostics (POC-IVD) are increasingly becoming widespread as an acceptable means of providing rapid diagnostic results to facilitate decision-making in many clinical pathways. Evidence in utility, usability and cost-effectiveness is currently provided in a fragmented and detached manner that is fraught with methodological challenges, given the disruptive nature these tests have on the clinical pathway. The Point-of-care Key Evidence Tool (POCKET) checklist aims to provide an integrated evidence-based framework that incorporates all required evidence to guide the evaluation of POC-IVD to meet the needs of policy and decision-makers in the National Health Service (NHS). METHODS AND ANALYSIS: A multimethod approach will be applied in order to develop the POCKET. A thorough literature review has formed the basis of a robust Delphi process and validation study. Semistructured interviews are being undertaken with POC-IVD stakeholders, including industry, regulators, commissioners, clinicians and patients, to understand what evidence is required to facilitate decision-making. Emergent themes will be translated into a series of statements to form a survey questionnaire that aims to reach a consensus in each stakeholder group as to what needs to be included in the tool. Results will be presented to a workshop to discuss the statements brought forward and the optimal format for the tool. Once assembled, the tool will be field-tested through case studies to ensure validity and usability and to inform refinement, if required. The final version will be published online with a call for comments. Limitations include unpredictable sample representation, development of a compromise position rather than consensus, and absence of blinding in the validation exercise. ETHICS AND DISSEMINATION: The Imperial College Joint Research Compliance Office and the Imperial College Hospitals NHS Trust R&D department have approved the protocol. The checklist tool will be disseminated through a PhD thesis, a website, peer-reviewed publication, academic conferences and formal presentations.

    The conceptual and practical ethical dilemmas of using health discussion board posts as research data.

    Increasing numbers of people living with a long-term health condition are putting personal health information online, including on discussion boards. Many discussion boards contain material of potential use to researchers; however, it is unclear how this information can and should be used by researchers. To date there has been no evaluation of the views of those individuals sharing health information online regarding the use of their shared information for research purposes.

    The Partial Evaluation Approach to Information Personalization

    Information personalization refers to the automatic adjustment of information content, structure, and presentation tailored to an individual user. By reducing information overload and customizing information access, personalization systems have emerged as an important segment of the Internet economy. This paper presents a systematic modeling methodology - PIPE (`Personalization is Partial Evaluation') - for personalization. Personalization systems are designed and implemented in PIPE by modeling an information-seeking interaction in a programmatic representation. The representation supports the description of information-seeking activities as partial information and their subsequent realization by partial evaluation, a technique for specializing programs. We describe the modeling methodology at a conceptual level and outline representational choices. We present two application case studies that use PIPE for personalizing web sites and describe how PIPE suggests a novel evaluation criterion for information system designs. Finally, we mention several fundamental implications of adopting the PIPE model for personalization and when it is (and is not) applicable. Comment: Comprehensive overview of the PIPE model for personalization