50,817 research outputs found

    Looking at Vector Space and Language Models for IR using Density Matrices

    In this work, we conduct a joint analysis of both Vector Space and Language Models for IR using the mathematical framework of Quantum Theory. We shed light on how both models allocate the space of density matrices. A density matrix is shown to be a general representational tool capable of leveraging the capabilities of both VSM and LM representations, thus paving the way for a new generation of retrieval models. We analyze the possible implications suggested by our findings. Comment: In Proceedings of Quantum Interaction 201
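    As an illustrative sketch only (not the paper's exact construction), a density matrix can encode a VSM-style term vector as a rank-1 projector and an LM-style unigram distribution as a diagonal matrix, with matching scored by the trace of the product; the toy vocabulary and scoring choice below are assumptions.

```python
# Illustrative sketch (not the paper's construction): a document mapped to a
# density matrix either from its VSM term vector or from its LM distribution.
import numpy as np

def vsm_density(term_vector):
    """Rank-1 density matrix |d><d| from a (tf or tf-idf) term vector."""
    v = np.asarray(term_vector, dtype=float)
    v = v / np.linalg.norm(v)
    return np.outer(v, v)

def lm_density(term_probs):
    """Diagonal density matrix from a unigram language model P(t|d)."""
    p = np.asarray(term_probs, dtype=float)
    return np.diag(p / p.sum())

def score(rho_doc, rho_query):
    """Quantum-style matching score tr(rho_doc @ rho_query)."""
    return float(np.trace(rho_doc @ rho_query))

# Toy vocabulary of three terms.
doc_vsm = vsm_density([2, 1, 0])        # term frequencies
doc_lm = lm_density([0.6, 0.3, 0.1])    # smoothed unigram probabilities
query = vsm_density([1, 0, 0])
print(score(doc_vsm, query), score(doc_lm, query))
```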

    Multicriteria mapping manual: version 1.0

    This Manual offers basic advice on how to do multicriteria mapping (MCM). It suggests how to: go about designing and building a typical MCM project; engage with participants and analyse results – and get the most out of the online MCM tool. Key terms are shown in bold italics and defined and explained in a final Annex. The online MCM software tool provides its own operational help. So this Manual is more focused on the general approach. There are no rigid rules. MCM is structured, but very flexible. It allows many more detailed features than can be covered here. MCM users are encouraged to think for themselves and be responsible and creative.
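    A minimal sketch of the kind of appraisal that sits behind an MCM project: participants score options against criteria with pessimistic and optimistic values and then weight the criteria. The function name, data layout, and numbers below are illustrative assumptions, not the Manual's procedure or the online tool's interface.

```python
# Hypothetical sketch of the appraisal step in a multicriteria mapping:
# each option gets a (pessimistic, optimistic) score under every criterion,
# and criteria are weighted. Names and data are illustrative only.

def mcm_ranks(scores, weights):
    """scores: {option: {criterion: (pessimistic, optimistic)}};
    weights: {criterion: weight}. Returns a (low, high) weighted rank per option."""
    total = sum(weights.values())
    ranks = {}
    for option, by_criterion in scores.items():
        low = sum(weights[c] * s[0] for c, s in by_criterion.items()) / total
        high = sum(weights[c] * s[1] for c, s in by_criterion.items()) / total
        ranks[option] = (low, high)
    return ranks

scores = {
    "Option A": {"cost": (40, 60), "impact": (70, 90)},
    "Option B": {"cost": (55, 65), "impact": (50, 80)},
}
weights = {"cost": 1.0, "impact": 2.0}
print(mcm_ranks(scores, weights))
```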

    Bringing Structure into Summaries: Crowdsourcing a Benchmark Corpus of Concept Maps

    Concept maps can be used to concisely represent important information and bring structure into large document collections. Therefore, we study a variant of multi-document summarization that produces summaries in the form of concept maps. However, suitable evaluation datasets for this task are currently missing. To close this gap, we present a newly created corpus of concept maps that summarize heterogeneous collections of web documents on educational topics. It was created using a novel crowdsourcing approach that allows us to efficiently determine important elements in large document collections. We release the corpus along with a baseline system and proposed evaluation protocol to enable further research on this variant of summarization. Comment: Published at EMNLP 201
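    A minimal sketch of how a concept-map summary can be represented as data: concepts as nodes and labelled propositions as edges. The structure below is an assumption for illustration, not the released corpus format.

```python
# Minimal sketch of a concept-map summary: nodes are concepts and edges are
# labelled propositions (concept, relation, concept). Illustrative only;
# this is not the corpus's actual release format.
from dataclasses import dataclass

@dataclass(frozen=True)
class Proposition:
    source: str    # concept label
    relation: str  # linking phrase
    target: str    # concept label

concept_map = [
    Proposition("students", "learn better with", "concept maps"),
    Proposition("concept maps", "summarize", "document collections"),
]

# The set of concepts is recovered from the endpoints of the propositions.
concepts = {p.source for p in concept_map} | {p.target for p in concept_map}
print(sorted(concepts))
```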

    On the Accuracy of Group Credences

    To appear in Szabó Gendler, T. & J. Hawthorne (eds.), Oxford Studies in Epistemology, volume 6. We often ask for the opinion of a group of individuals. How strongly does the scientific community believe that the rate at which sea levels are rising increased over the last 200 years? How likely does the UK Treasury think it is that there will be a recession if the country leaves the European Union? What are these group credences that such questions request? And how do they relate to the individual credences assigned by the members of the particular group in question? According to the credal judgment aggregation principle, Linear Pooling, the credence function of a group should be a weighted average or linear pool of the credence functions of the individuals in the group. In this paper, I give an argument for Linear Pooling based on considerations of accuracy. And I respond to two standard objections to the aggregation principle.
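    Linear Pooling, as described above, takes the group credence in a proposition to be a weighted average of the members' credences. The short sketch below just computes that; the weights and credence values are illustrative.

```python
# Linear Pooling as stated in the abstract: the group's credence in a
# proposition is a weighted average of the members' credences.
# Weights and credence values below are illustrative only.

def linear_pool(credences, weights):
    """credences: list of P_i(A); weights: non-negative, summing to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9 and all(w >= 0 for w in weights)
    return sum(w * c for w, c in zip(weights, credences))

# Three individuals' credences in a proposition A, equally weighted.
print(linear_pool([0.9, 0.6, 0.75], [1/3, 1/3, 1/3]))  # ≈ 0.75
```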

    Year 1 phonics screening check consultation

    "The Government is committed to raising children's achievement in reading, and has expressed the intention to establish a phonics screening check for children in Year 1. This will be a short, light-touch screening check designed to confirm that children have grasped the basics of phonic decoding and to identify those pupils who need extra help at an early stage, so that schools can provide support. The results of the screening check will provide valuable information to parents. The screening check will be part of the arrangements for the statutory assessment of children in respect of the first Key Stage. This consultation seeks views on proposals around the purpose, structure and administration of the screening check" -- front cover

    Thematic Annotation: extracting concepts out of documents

    Contrary to standard approaches to topic annotation, the technique used in this work does not centrally rely on some sort of (possibly statistical) keyword extraction. In fact, the proposed annotation algorithm uses a large scale semantic database, the EDR Electronic Dictionary, that provides a concept hierarchy based on hyponym and hypernym relations. This concept hierarchy is used to generate a synthetic representation of the document by aggregating the words present in topically homogeneous document segments into a set of concepts best preserving the document's content. This new extraction technique uses an unexplored approach to topic selection. Instead of using semantic similarity measures based on a semantic resource, the latter is processed to extract the part of the conceptual hierarchy relevant to the document content. Then this conceptual hierarchy is searched to extract the most relevant set of concepts to represent the topics discussed in the document. Notice that this algorithm is able to extract generic concepts that are not directly present in the document. Comment: Technical report EPFL/LIA. 81 pages, 16 figures
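    A hedged sketch of the aggregation idea: walk each word of a segment up a hypernym hierarchy and keep the ancestor concepts that cover the segment best, which can surface generic concepts that never appear in the text. The toy hierarchy and coverage heuristic are stand-ins, not the EDR-based algorithm of the report.

```python
# Hypothetical sketch of aggregating segment words into covering concepts via
# a hypernym hierarchy. The toy hierarchy and the simple coverage heuristic
# stand in for the EDR-based algorithm; they are not the report's method.
from collections import Counter

HYPERNYMS = {          # child -> parent (toy hierarchy, illustrative only)
    "cat": "mammal", "dog": "mammal", "mammal": "animal",
    "sparrow": "bird", "bird": "animal",
}

def ancestors(word):
    """Yield the word itself and every hypernym above it."""
    while word is not None:
        yield word
        word = HYPERNYMS.get(word)

def aggregate(words, top_k=2):
    """Count how many segment words each concept covers; keep the top_k
    most covering concepts as the segment's synthetic representation."""
    coverage = Counter()
    for w in words:
        for concept in ancestors(w):
            coverage[concept] += 1
    return [c for c, _ in coverage.most_common(top_k)]

# 'animal' is returned even though it never occurs in the segment.
print(aggregate(["cat", "dog", "sparrow"]))
```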

    An Evaluation of the Sustainability of Global Tuna Stocks Relative to Marine Stewardship Council Criteria

    The Marine Stewardship Council (MSC) has established a program whereby a fishery may be certified as being sustainable. The sustainability of a fishery is defined by MSC criteria embodied in three Principles, relating to the status of the stock, the ecosystem of which the stock is a member, and the fishery management system. Since many of these MSC criteria are comparable across global tuna stocks, the MSC scoring system was used to evaluate nineteen stocks of tropical and temperate tunas throughout the world and to evaluate the management systems of the Regional Fishery Management Organizations (RFMOs) associated with these stocks.
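    A hedged sketch of an MSC-style evaluation: indicator scores are grouped under the three Principles and averaged per Principle. The 60/80 pass thresholds follow the commonly cited MSC convention but should be read as assumptions here, not the paper's exact procedure.

```python
# Hedged sketch of an MSC-style evaluation: indicator scores grouped under the
# three Principles (stock status, ecosystem, management system) and averaged.
# The pass rule (every indicator >= 60 and every Principle average >= 80) is
# an assumed convention, not necessarily the paper's exact method.

def evaluate_stock(scores_by_principle, indicator_floor=60, principle_pass=80):
    averages = {
        principle: sum(scores) / len(scores)
        for principle, scores in scores_by_principle.items()
    }
    passes = all(
        min(scores) >= indicator_floor for scores in scores_by_principle.values()
    ) and all(avg >= principle_pass for avg in averages.values())
    return averages, passes

example = {                      # illustrative indicator scores for one stock
    "P1 stock status": [85, 75, 90],
    "P2 ecosystem": [80, 70, 85],
    "P3 management": [90, 85, 80],
}
print(evaluate_stock(example))
```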

    The Effectiveness of Concept Based Search for Video Retrieval

    In this paper we investigate how a small number of high-level concepts derived for video shots, such as Sport, Face, Indoor, etc., can be used effectively for ad hoc search in video material. We answer the following questions: 1) Can we automatically construct concept queries from ordinary text queries? 2) What is the best way to combine evidence from single concept detectors into final search results? We evaluated algorithms for automatic concept query formulation using WordNet-based concept extraction, and we evaluated algorithms for fast, on-line combination of concepts. Experimental results on data from the TREC Video 2005 workshop and 25 test users show the following: 1) automatic query formulation through WordNet-based concept extraction can achieve results comparable to user-created query concepts, and 2) combination methods that take neighboring shots into account outperform simpler combination methods.
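    An illustrative sketch of combining per-shot concept detector scores, including a variant that averages over neighboring shots in the spirit of the combination methods reported as strongest; the functions and toy scores below are assumptions, not the paper's formulation.

```python
# Illustrative sketch of combining concept detector outputs per shot.
# detector_scores[concept][i] is the confidence that the concept occurs in shot i.
# The neighborhood-aware variant averages over adjacent shots; it is an
# assumed stand-in, not the paper's exact combination method.

def combine_simple(detector_scores, shot):
    """Average the selected concepts' scores for a single shot."""
    return sum(scores[shot] for scores in detector_scores.values()) / len(detector_scores)

def combine_with_neighbors(detector_scores, shot, window=1):
    """Average concept scores over a window of neighboring shots."""
    n_shots = len(next(iter(detector_scores.values())))
    lo, hi = max(0, shot - window), min(n_shots, shot + window + 1)
    neighborhood = range(lo, hi)
    return sum(combine_simple(detector_scores, s) for s in neighborhood) / len(neighborhood)

detector_scores = {            # toy scores for a 5-shot video
    "sport":  [0.1, 0.7, 0.8, 0.2, 0.1],
    "indoor": [0.9, 0.3, 0.2, 0.6, 0.8],
}
ranked = sorted(range(5), key=lambda s: combine_with_neighbors(detector_scores, s), reverse=True)
print(ranked)
```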