
    A Study of Snippet Length and Informativeness: Behaviour, Performance and User Experience

    The design and presentation of a Search Engine Results Page (SERP) has been subject to much research. With many contemporary aspects of the SERP now under scrutiny, work still remains in investigating more traditional SERP components, such as the result summary. Prior studies have examined a variety of different aspects of result summaries, but in this paper we investigate the influence of result summary length on search behaviour, performance and user experience. To this end, we designed and conducted a within-subjects experiment using the TREC AQUAINT news collection with 53 participants. Using Kullback-Leibler distance as a measure of information gain, we examined result summaries of different lengths and selected four conditions where the change in information gain was the greatest: (i) title only; (ii) title plus one snippet; (iii) title plus two snippets; and (iv) title plus four snippets. Findings show that participants broadly preferred longer result summaries, as they were perceived to be more informative. However, their performance in terms of correctly identifying relevant documents was similar across all four conditions. Furthermore, while the participants felt that longer summaries were more informative, empirical observations suggest otherwise: although participants were more likely to click on relevant items given longer summaries, they were also more likely to click on non-relevant items. These findings show, first, that longer is not necessarily better, even though participants perceived it to be so, and second, that the length and informativeness of summaries are positively related to their attractiveness (i.e. clickthrough rates). Together, they highlight tensions between perception and performance that need to be taken into account when designing result summaries.
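
    As a rough illustration of the summary-selection idea described above (not the authors' implementation), the sketch below computes a Kullback-Leibler style information-gain score between a candidate summary and its source document using Laplace-smoothed unigram language models; the function names and toy texts are ours.

    # Illustrative sketch only: KL divergence between a summary's unigram language
    # model and the full document's model, as a rough proxy for how much information
    # a (longer) summary carries about the document.
    from collections import Counter
    import math

    def unigram_lm(text, vocab, smoothing=1.0):
        """Laplace-smoothed unigram probabilities over a fixed vocabulary."""
        counts = Counter(text.lower().split())
        total = sum(counts.values()) + smoothing * len(vocab)
        return {w: (counts[w] + smoothing) / total for w in vocab}

    def kl_divergence(p, q):
        """KL(p || q) for two distributions defined over the same vocabulary."""
        return sum(p[w] * math.log(p[w] / q[w]) for w in p)

    def summary_information_gain(summary, document):
        vocab = set(document.lower().split()) | set(summary.lower().split())
        return kl_divergence(unigram_lm(summary, vocab), unigram_lm(document, vocab))

    # Toy comparison of a shorter and a longer summary of the same document.
    doc = "the council approved a new flood defence plan for the river after weeks of debate"
    print(summary_information_gain("council approved flood defence plan", doc))
    print(summary_information_gain("council approved flood defence plan after weeks of debate", doc))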

    Transfer Learning for Multi-language Twitter Election Classification

    Both politicians and citizens are increasingly embracing social media as a means to disseminate information and comment on various topics, particularly during significant political events, such as elections. Such commentary during elections is also of interest to social scientists and pollsters. To facilitate the study of social media during elections, there is a need to automatically identify posts that are topically related to those elections. However, current studies have focused on elections within English-speaking regions, and hence the resultant election content classifiers are only applicable for elections in countries where the predominant language is English. On the other hand, as social media is becoming more prevalent worldwide, there is an increasing need for election classifiers that can be generalised across different languages, without building a training dataset for each election. In this paper, based upon transfer learning, we study the development of effective and reusable election classifiers for use on social media across multiple languages. We combine transfer learning with different classifiers such as Support Vector Machines (SVM) and state-of-the-art Convolutional Neural Networks (CNN), which make use of word embedding representations for each social media post. We generalise the learned classifier models for cross-language classification by using a linear translation approach to map the word embedding vectors from one language into another. Experiments conducted over two election datasets in different languages show that without using any training data from the target language, linear translations outperform a classical transfer learning approach, namely Transfer Component Analysis (TCA), by 80% in recall and 25% in F1 measure.
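
    The linear translation step can be pictured as a least-squares mapping between two monolingual embedding spaces. The sketch below is a generic illustration under that assumption, not the paper's code; the dictionary vectors, dimensions and tokenisation are placeholders.

    # Illustrative sketch of a least-squares ("translation matrix") mapping between
    # two monolingual embedding spaces; dictionary vectors and dimensions are toy data.
    import numpy as np

    def fit_translation_matrix(src_vecs, tgt_vecs):
        """Solve min_W ||src_vecs @ W - tgt_vecs||^2 over paired dictionary entries."""
        W, *_ = np.linalg.lstsq(src_vecs, tgt_vecs, rcond=None)
        return W  # shape: (d_src, d_tgt)

    def embed_post(tokens, embeddings, dim):
        """Average the word vectors of a post, skipping out-of-vocabulary tokens."""
        vecs = [embeddings[t] for t in tokens if t in embeddings]
        return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

    # Toy usage: aligned dictionary vectors stand in for real bilingual embeddings.
    d = 100
    rng = np.random.default_rng(0)
    dict_src, dict_tgt = rng.normal(size=(500, d)), rng.normal(size=(500, d))
    W = fit_translation_matrix(dict_src, dict_tgt)
    src_embeddings = {"election": rng.normal(size=d), "vote": rng.normal(size=d)}
    translated = embed_post(["election", "vote"], src_embeddings, d) @ W
    # 'translated' now lives in the target-language space and could be fed to a
    # classifier (e.g. an SVM or CNN) trained on posts from that language.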

    Topical retinoic acid changes the epidermal cell surface glycosylation pattern towards that of a mucosal epithelium

    Topical all-trans retinoic acid (RA) produces a number of epidermal changes which are indistinguishable from those observed following treatment with a local irritant, namely sodium lauryl sulphate (SLS). This observation has led to criticism that the efficacy of RA in disorders such as photoageing is merely a result of irritancy. In stratified epithelia, the cellular differentiation process is characterized by a stepwise synthesis of cell surface carbohydrates, and each type of stratified epithelium has its own specific pattern of carbohydrate expression. Glycosyltransferases, which are responsible for carbohydrate synthesis, are influenced by retinoids. Thus, we investigated whether epidermal cell surface glycosylation is altered in skin treated with topical RA, and contrasted it with changes induced by topical SLS. Skin biopsies were obtained from seven normal volunteers who had been treated, on three separate areas of buttock skin, with single applications of 0.1% RA, 2% SLS, or vehicle creams, followed by 4-day occlusion. Biopsies were assessed immunohistologically using highly specific monoclonal antibodies to cell surface carbohydrates (types 1, 2 and 3 chain structures), previously demonstrated in the epidermis and in oral mucosal epithelium. Although type 1 chain structures were not demonstrated in any of the samples, the distribution of type 2 and 3 chain structures in RA-treated epidermis was altered towards that seen in a mucosal epithelium. T antigen, a mucin-type cell surface carbohydrate structure normally expressed throughout the epidermis, was only observed in the granular layer of RA-treated epidermis, a feature of mucosal epithelia. Le(y), normally only seen in non-keratinized buccal epithelium, was strongly expressed in RA-treated epidermis. In contrast, the glycosylation pattern of the SLS-treated epidermis was not significantly different from that observed after vehicle treatment. Thus, RA treatment converts normal stratified epithelium towards the phenotype of mucosal epithelium, with a decrease in T antigen and a concomitant increase in Le(y). These changes were not observed following treatment with SLS, and identify an important difference between RA effects and irritancy.

    An integrated approach to rotorcraft human factors research

    As the potential of civil and military helicopters has increased, more complex and demanding missions in increasingly hostile environments have been required. Users, designers, and manufacturers have an urgent need for information about human behavior and function to create systems that take advantage of human capabilities, without overloading them. Because there is a large gap between what is known about human behavior and the information needed to predict pilot workload and performance in the complex missions projected for pilots of advanced helicopters, Army and NASA scientists are actively engaged in Human Factors Research at Ames. The research ranges from laboratory experiments to computational modeling, simulation evaluation, and in-flight testing. Information obtained in highly controlled but simpler environments generates predictions which can be tested in more realistic situations. These results are used, in turn, to refine theoretical models, provide the focus for subsequent research, and ensure operational relevance, while maintaining predictive advantages. The advantages and disadvantages of each type of research are described along with examples of experimental results.

    An analysis of query difficulty for information retrieval in the medical domain

    We present a post-hoc analysis of a benchmarking activity for information retrieval (IR) in the medical domain to determine if performance for queries with different levels of complexity can be associated with different IR methods or techniques. Our analysis is based on data and runs for Task 3 of the CLEF 2013 eHealth lab, which provided patient queries and a large medical document collection for the development of patient-centred medical information retrieval techniques. We categorise the queries based on their complexity, which is defined as the number of medical concepts they contain. We then show how query complexity affects the performance of runs submitted to the lab, and provide suggestions for improving retrieval quality for this complex retrieval task and similar IR evaluation tasks.
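
    The kind of post-hoc breakdown described here can be sketched as a simple bucketing of per-query scores by concept count; the concept counts and scores below are made-up stand-ins rather than CLEF 2013 eHealth data.

    # Illustrative sketch: bucket queries by their number of medical concepts and
    # average a run's per-query scores within each bucket. All values are stand-ins.
    from collections import defaultdict

    def performance_by_complexity(per_query_scores, concepts_per_query):
        """per_query_scores: {query_id: score}; concepts_per_query: {query_id: int}."""
        buckets = defaultdict(list)
        for qid, score in per_query_scores.items():
            buckets[concepts_per_query[qid]].append(score)
        return {n: sum(s) / len(s) for n, s in sorted(buckets.items())}

    scores = {"q1": 0.42, "q2": 0.31, "q3": 0.18, "q4": 0.25}
    concepts = {"q1": 1, "q2": 1, "q3": 3, "q4": 2}
    print(performance_by_complexity(scores, concepts))  # {1: 0.365, 2: 0.25, 3: 0.18}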

    Unbiased Comparative Evaluation of Ranking Functions

    Eliciting relevance judgments for ranking evaluation is labor-intensive and costly, motivating careful selection of which documents to judge. Unlike traditional approaches that make this selection deterministically, probabilistic sampling has shown intriguing promise, since it enables the design of estimators that are provably unbiased even when reusing data with missing judgments. In this paper, we first unify and extend these sampling approaches by viewing the evaluation problem as a Monte Carlo estimation task that applies to a large number of common IR metrics. Drawing on the theoretical clarity that this view offers, we tackle three practical evaluation scenarios: comparing two systems, comparing k systems against a baseline, and ranking k systems. For each scenario, we derive an estimator and a variance-optimizing sampling distribution while retaining the strengths of sampling-based evaluation, including unbiasedness, reusability despite missing data, and ease of use in practice. In addition to the theoretical contribution, we empirically evaluate our methods against previously used sampling heuristics and find that they generally cut the number of required relevance judgments at least in half.
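
    To make the idea of unbiased, sampling-based evaluation concrete, the following sketch shows a generic Horvitz-Thompson style estimate of precision@k under independently sampled judgments; it illustrates the general principle only, not the estimators derived in the paper.

    # Illustrative sketch of sampling-based, unbiased metric estimation: only a
    # sampled subset of documents is judged, and each judged document's contribution
    # is reweighted by its inclusion probability (Horvitz-Thompson style).
    import random

    def sample_judgments(pool, inclusion_probs, true_relevance):
        """Judge each pooled document independently with its inclusion probability."""
        return {d: true_relevance[d] for d in pool if random.random() < inclusion_probs[d]}

    def estimate_precision_at_k(ranking, k, judgments, inclusion_probs):
        """Unjudged documents contribute 0; judged ones are weighted by 1/p."""
        total = sum(judgments[d] / inclusion_probs[d] for d in ranking[:k] if d in judgments)
        return total / k

    # Toy check: the estimate averages to the true P@10 over repeated samples.
    pool = [f"d{i}" for i in range(20)]
    relevance = {d: int(i % 3 == 0) for i, d in enumerate(pool)}
    probs = {d: 0.5 for d in pool}
    estimates = [estimate_precision_at_k(pool, 10, sample_judgments(pool, probs, relevance), probs)
                 for _ in range(2000)]
    print(sum(estimates) / len(estimates))  # close to the true value of 0.4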

    Evaluating Variable-Length Multiple-Option Lists in Chatbots and Mobile Search

    In recent years, the proliferation of smart mobile devices has led to the gradual integration of search functionality within mobile platforms. This has created an incentive to move away from the "ten blue links" metaphor, as mobile users are less likely to click on them, expecting to get the answer directly from the snippets. In turn, this has revived the interest in Question Answering. Then, along came chatbots, conversational systems, and messaging platforms, where the user needs could be better served with the system asking follow-up questions in order to better understand the user's intent. While typically a user would expect a single response at any utterance, a system could also return multiple options for the user to select from, based on different system understandings of the user's intent. However, this possibility should not be overused, as this practice could confuse and/or annoy the user. How to produce good variable-length lists, given the conflicting objectives of staying short while maximizing the likelihood of having a correct answer included in the list, is an underexplored problem. It is also unclear how to evaluate a system that tries to do that. Here we aim to bridge this gap. In particular, we define some necessary and some optional properties that an evaluation measure fit for this purpose should have. We further show that existing evaluation measures from the IR tradition are not entirely suitable for this setup, and we propose novel evaluation measures that address it satisfactorily.
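
    As a purely hypothetical example of the trade-off discussed above, and not one of the measures proposed in the paper, the sketch below scores a variable-length option list by rewarding the inclusion of a correct answer while discounting longer lists.

    # Hypothetical measure, for illustration only: reward a list that contains a
    # correct answer, discounted by the length of the list, so that padding a list
    # with extra options only pays off if it adds a missing correct answer.
    import math

    def length_discounted_hit(options, correct_answers):
        """1/log2(1 + len(options)) if any option is correct, else 0."""
        if any(o in correct_answers for o in options):
            return 1.0 / math.log2(1 + len(options))
        return 0.0

    gold = {"book a table at Luigi's"}
    print(length_discounted_hit(["book a table at Luigi's"], gold))                # 1.0
    print(length_discounted_hit(["call Luigi", "book a table at Luigi's"], gold))  # ~0.63
    print(length_discounted_hit(["call Luigi", "directions to Luigi's"], gold))    # 0.0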

    Modelling Relevance towards Multiple Inclusion Criteria when Ranking Patients

    In the medical domain, information retrieval systems can be used for identifying cohorts (i.e. patients) required for clinical studies. However, a challenge faced by such search systems is to retrieve the cohorts whose medical histories cover the inclusion criteria specified in a query, which are often complex and include multiple medical conditions. For example, a query may aim to find patients with both 'lupus nephritis' and 'thrombotic thrombocytopenic purpura'. In a typical best-match retrieval setting, any patient exhibiting all of the inclusion criteria should naturally be ranked higher than a patient that only exhibits a subset, or none, of the criteria. In this work, we extend the two main existing models for ranking patients to take into account the coverage of the inclusion criteria by adapting techniques from recent research into coverage-based diversification. We propose a novel approach for modelling the coverage of the query inclusion criteria within the records of a particular patient, and thereby rank highly those patients whose medical records are likely to cover all of the specified criteria. In particular, our proposed approach estimates the relevance of a patient based on the mixture of the probability that the patient is retrieved by a patient ranking model for a given query, and the likelihood that the patient's records cover the query criteria. The latter is measured using the relevance towards each of the criteria stated in the query, represented in the form of sub-queries. We thoroughly evaluate our proposed approach using the test collection provided by the TREC 2011 and 2012 Medical Records track. Our results show significant improvements over existing strong baselines.
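
    A minimal sketch of the interpolation described above, with illustrative scores and mixing weight: a patient's final score mixes the base ranking-model score with an aggregate of per-criterion sub-query scores, so that an uncovered criterion pulls the score down.

    # Illustrative sketch of the interpolation: mix a patient's score under the base
    # ranking model with an aggregate of per-criterion sub-query scores, so that an
    # uncovered criterion pulls the final score down. Scores and lambda are made up.
    import math

    def coverage_score(subquery_scores):
        """Product of per-criterion scores: near zero if any criterion is uncovered."""
        return math.prod(subquery_scores)

    def mixed_patient_score(base_score, subquery_scores, lam=0.5):
        return lam * base_score + (1 - lam) * coverage_score(subquery_scores)

    # Patient A matches both criteria moderately; patient B matches only one of them.
    print(mixed_patient_score(0.60, [0.5, 0.4]))  # 0.40
    print(mixed_patient_score(0.70, [0.9, 0.0]))  # 0.35 -> below A despite a higher base score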