10 research outputs found

    Measures to Evaluate the Superiority of a Search Engine

    The main objective of a search engine is to return relevant results for a user's query in as little time as possible. Evaluation metrics are used to measure the quality of a search engine. This review paper presents a summary of the different metrics used to evaluate a search engine in terms of effectiveness, efficiency, and relevance.
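
    As an illustration of the kind of effectiveness metrics such a review covers (this example is not taken from the paper), the sketch below computes two standard measures, precision at k and nDCG at k, from a ranked list of relevance judgements; the sample judgements are made up.

    import math

    def precision_at_k(relevances, k):
        """Fraction of the top-k results judged relevant (binary view of graded judgements)."""
        return sum(1 for r in relevances[:k] if r > 0) / k

    def ndcg_at_k(relevances, k):
        """Normalised discounted cumulative gain over the top-k results."""
        def dcg(rels):
            return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels))
        ideal_dcg = dcg(sorted(relevances, reverse=True)[:k])
        return dcg(relevances[:k]) / ideal_dcg if ideal_dcg > 0 else 0.0

    # Graded relevance of the first five results returned for one query (hypothetical).
    ranked_relevances = [3, 2, 0, 1, 0]
    print(precision_at_k(ranked_relevances, 5))  # 0.6
    print(ndcg_at_k(ranked_relevances, 5))       # ~0.985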

    Predicting re-finding activity and difficulty

    In this study, we address the problem of identifying whether users are attempting to re-find information and estimating the difficulty of the re-finding task. We propose to consider task-level information (e.g., multiple queries and click information) rather than only queries. Our resulting prediction models are shown to be significantly more accurate (by 2%) than the current state of the art. While past research assumes that the user's previous search history is available to the prediction model, we examine whether re-finding detection is possible without access to this information. Our evaluation indicates that such detection is possible, but more challenging. We further describe the first predictive model for detecting re-finding difficulty, showing it to be significantly better than existing approaches for detecting general search difficulty.
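
    A minimal sketch of how such a task-level predictor might look, assuming a hypothetical session structure (a list of queries, each with its clicked URLs) and a scikit-learn logistic regression as the learner; the feature set and learner are illustrative assumptions, not the authors' exact design.

    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def task_features(task):
        """Turn one search task (list of queries, each with its clicks) into a feature vector."""
        queries = [q["text"] for q in task]
        clicks = [url for q in task for url in q["clicked_urls"]]
        return [
            len(queries),                                   # queries issued in the task
            len(set(queries)) / max(len(queries), 1),       # query diversity
            len(clicks),                                    # total clicks
            len(set(clicks)) / max(len(clicks), 1),         # fraction of distinct clicked URLs
            sum(len(q.split()) for q in queries) / max(len(queries), 1),  # mean query length
        ]

    def evaluate(tasks, labels):
        """tasks: list of search tasks; labels: 1 = re-finding, 0 = new-finding."""
        X = [task_features(t) for t in tasks]
        model = LogisticRegression(max_iter=1000)
        return cross_val_score(model, X, labels, cv=5, scoring="accuracy").mean()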

    A task level metric for measuring web search satisfaction and its application on improving relevance estimation

    Automatic Online Evaluation of Intelligent Assistants

    Voice-activated intelligent assistants, such as Siri, Google Now, and Cortana, are prevalent on mobile devices. However, it is challenging to evaluate them due to the varied and evolving set of tasks supported, e.g., voice command, web search, and chat. Since each task may have its own procedure and a unique form of correct answer, it is expensive to evaluate each task individually. This paper is the first attempt to solve this challenge. We develop consistent and automatic approaches that can evaluate different tasks in voice-activated intelligent assistants. We use implicit feedback from users to predict whether users are satisfied with the intelligent assistant as well as its components, i.e., speech recognition and intent classification. Using this approach, we can potentially evaluate and compare different tasks within and across intelligent assistants according to the predicted user satisfaction rates. Our approach is characterized by an automatic scheme for categorizing user-system interaction into task-independent dialog actions, e.g., the user is commanding, selecting, or confirming an action. We use the action sequence in a session to predict user satisfaction and the quality of speech recognition and intent classification. We also incorporate other features to further improve our approach, including features derived from previous work on web search satisfaction prediction and features utilizing acoustic characteristics of voice requests. We evaluate our approach using data collected from a user study. Results show our approach can accurately identify satisfactory and unsatisfactory sessions.
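
    A rough sketch of the action-sequence idea, assuming a small illustrative action vocabulary and an n-gram bag-of-actions representation fed to a scikit-learn classifier; the action labels, representation, and learner are assumptions for illustration rather than the paper's actual model.

    from collections import Counter
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.pipeline import make_pipeline

    # Assumed task-independent dialog actions, derived from user-system interaction logs.
    ACTIONS = ["command", "select", "confirm", "repeat", "reformulate", "abandon"]

    def action_ngrams(session_actions, n=2):
        """Count action n-grams in one session, e.g. 'command -> repeat'."""
        grams = zip(*(session_actions[i:] for i in range(n)))
        return Counter(" -> ".join(g) for g in grams)

    def train(sessions, labels):
        """sessions: list of action-label sequences; labels: 1 = satisfied, 0 = unsatisfied."""
        features = [action_ngrams(s) for s in sessions]
        model = make_pipeline(DictVectorizer(), GradientBoostingClassifier())
        model.fit(features, labels)
        return model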

    Search and Breast Cancer: On Episodic Shifts of Attention over Life Histories of an Illness

    We seek to understand the evolving needs of people who are faced with a life-changing medical diagnosis, based on analyses of queries extracted from an anonymized search query log. Focusing on breast cancer, we manually tag a set of Web searchers as showing patterns of search behavior consistent with someone grappling with the screening, diagnosis, and treatment of breast cancer. We build and apply probabilistic classifiers to detect these searchers from multiple sessions and to identify the timing of diagnosis using temporal and statistical features. We explore the changes in information seeking over time before and after an inferred diagnosis of breast cancer by aligning multiple searchers by the estimated time of diagnosis. We employ the classifier to automatically identify 1700 candidate searchers with an estimated 90% precision, and we predict the day of diagnosis to within 15 days with 88% accuracy. We show that the geographic and demographic attributes of searchers identified with high probability are strongly correlated with ground-truth reported incidence rates. We then analyze the content of queries over time for inferred cancer patients, using a detailed ontology of cancer-related search terms. The analysis reveals the rich temporal structure of the evolving queries of people likely diagnosed with breast cancer. Finally, we focus on subtypes of illness based on inferred stages of cancer and show clinically relevant dynamics of information seeking based on the dominant stage expressed by searchers.
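
    The alignment step described above can be sketched roughly as follows, assuming a hypothetical per-searcher record holding an inferred diagnosis date and a timestamped query list; the study's actual classifiers and ontology-based tagging are not reproduced here.

    from collections import defaultdict
    from datetime import date

    def align_by_diagnosis(searchers):
        """searchers: {searcher_id: {"diagnosis": date, "queries": [(date, text), ...]}}"""
        binned = defaultdict(list)  # week offset relative to diagnosis -> query texts
        for info in searchers.values():
            d0 = info["diagnosis"]
            for qdate, text in info["queries"]:
                week_offset = (qdate - d0).days // 7
                binned[week_offset].append(text)
        return binned

    # Hypothetical example of one searcher's timeline.
    searchers = {
        "u1": {"diagnosis": date(2014, 3, 10),
               "queries": [(date(2014, 2, 20), "breast lump symptoms"),
                           (date(2014, 4, 2), "lumpectomy recovery time")]},
    }
    print(sorted(align_by_diagnosis(searchers).items()))
    # [(-3, ['breast lump symptoms']), (3, ['lumpectomy recovery time'])]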

    Identification of re-finding tasks and search difficulty

    We address the problem of identifying whether users are attempting to re-find information and estimating the difficulty of the re-finding task. Identifying re-finding tasks and detecting search difficulties will enable search engines to respond dynamically to the search task being undertaken. To this end, we conduct user studies and query log analysis to gain a better understanding of re-finding tasks and search difficulties. Computing features gathered specifically in our user studies, we generate training sets from query log data, which are used to construct automatic identification (prediction) models. Using machine learning techniques, our re-finding identification model, the first at the task level, significantly outperforms existing query-based identification. While past research assumes that the user's previous search history is available to the prediction model, we examine whether re-finding detection is possible without access to this information. Our evaluation indicates that such detection is possible, but more challenging. We further describe the first predictive model for detecting re-finding difficulty, showing it to be significantly better than existing approaches for detecting general search difficulty. We also analyze the features that are important for identifying both re-finding and difficulty. Next, we investigate detailed identification of re-finding tasks and difficulties in terms of the type of vertical document to be re-found. The accuracy of the constructed predictive models indicates that re-finding tasks are indeed distinguishable across verticals and in comparison to general search tasks. This illustrates the need to adapt existing general search techniques to the re-finding context by presenting vertical-specific results. Despite the overall reduction in accuracy of predictions made independently of the user's original search, identifying "image re-finding" appears to be less dependent on such past information. Investigating the real-time prediction effectiveness of the models shows that predicting "image" document re-finding achieves the highest accuracy early in the search. Early predictions would allow search engines to adapt search results during re-finding activities. Furthermore, we study the difficulties in re-finding across verticals given some of the established indicators of difficulty in the general web search context. In terms of user effort, re-finding in the "image" vertical appears to take more effort, measured by the number of queries and clicks, than the other investigated verticals, while re-finding "reference" documents seems to be more time-consuming when there is a longer time gap between the re-finding and the corresponding original search. Exploring other features suggests that there may be difficulty indicators particular to the re-finding context and specific to each vertical. To sum up, this research investigates how to effectively support users with re-finding search tasks. To this end, we have identified features that allow for a more accurate distinction between re-finding and general tasks. This will enable search engines to better adapt search results for the re-finding context and improve users' search experience. Moreover, features indicative of similar/different and easy/difficult re-finding tasks can be employed to build balanced test environments, which could address one of the main gaps in the re-finding context.
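
    As a rough illustration of the effort indicators mentioned above (queries, clicks, and the time gap between the original and re-finding search), the sketch below aggregates them per vertical; the field names assume a hypothetical session-log layout rather than the thesis's actual data format.

    from statistics import mean
    from collections import defaultdict

    def effort_by_vertical(refinding_tasks):
        """refinding_tasks: iterable of dicts with keys 'vertical', 'queries', 'clicks',
        'original_end_ts', and 'refind_start_ts' (Unix timestamps in seconds)."""
        per_vertical = defaultdict(lambda: {"queries": [], "clicks": [], "gap_days": []})
        for task in refinding_tasks:
            stats = per_vertical[task["vertical"]]
            stats["queries"].append(len(task["queries"]))
            stats["clicks"].append(len(task["clicks"]))
            stats["gap_days"].append(
                (task["refind_start_ts"] - task["original_end_ts"]) / 86400.0
            )
        # Average each indicator per vertical, e.g. {"image": {"queries": 3.2, ...}, ...}
        return {
            vertical: {name: mean(values) for name, values in stats.items()}
            for vertical, stats in per_vertical.items()
        }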

    Supporting Exploratory Search Tasks Through Alternative Representations of Information

    Information seeking is a fundamental component of many of the complex tasks presented to us, and is often conducted through interactions with automated search systems such as Web search engines. Indeed, the ubiquity of Web search engines makes information so readily available that people now often turn to the Web for all manner of information-seeking needs. Furthermore, as the range of online information-seeking tasks grows, more complex and open-ended search activities have been identified. One type of complex search activity that is of increasing interest to researchers is exploratory search, where the goal involves "learning" or "investigating" rather than simply "looking up". Given the massive increase in information availability and the use of online search for tasks beyond simple look-up, researchers have noted that it becomes increasingly challenging for users to effectively leverage the available online information for complex and open-ended search activities. One of the main limitations of the current document retrieval paradigm offered by modern search engines is that it provides a ranked list of documents as a response to the searcher's query, with no further support for locating and synthesizing relevant information. The searcher is therefore left to find and make sense of useful information in a massive information space that lacks any overview or conceptual organization. This thesis explores the impact of alternative representations of search results on user behaviors and outcomes during exploratory search tasks. Our inquiry is inspired by the premise that exploratory search tasks require sensemaking, and that sensemaking involves constructing and interacting with representations of knowledge. As such, in order to provide searchers with more support in performing exploratory activities, there is a need to move beyond the current document retrieval paradigm by extending support for locating and externalizing semantic information from textual documents and by providing richer representations of the extracted information, coupled with mechanisms for accessing and interacting with the information in ways that support exploration and sensemaking. This dissertation presents a series of discrete research endeavours exploring different aspects of extracting information and presenting it in ways that support both the extraction and assimilation of relevant information. We first address the problem of extracting information that is more granular than documents as a response to a user's query by developing a novel information extraction system that represents documents as a series of entity-relationship tuples. Next, through a series of designs and evaluations of alternative representations of search results, we examine how this extracted information can be represented such that it extends the document-based search framework's support for exploratory search tasks. Finally, we assess the ecological validity of this research by exploring error-prone representations of search results and how they impact a searcher's ability to leverage our representations to perform exploratory search tasks. Overall, this research contributes towards designing future search systems by providing insights into the efficacy of alternative representations of search results for supporting exploratory search activities, culminating in a novel hybrid representation called Hierarchical Knowledge Graphs (HKG). To this end, we propose and develop a framework that enables a reliable investigation of the impact of different representations and of how they are perceived and utilized by information seekers.
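
    A small sketch of the entity-relationship-tuple idea, grouping (subject, relation, object) tuples under broader topic labels to form a simple two-level hierarchy; the tuples, topics, and grouping rule are made up for illustration, and the dissertation's Hierarchical Knowledge Graphs are considerably richer than this.

    from collections import defaultdict

    def build_hierarchy(tuples, topic_of):
        """tuples: [(subject, relation, object)]; topic_of: entity -> broader topic label."""
        hierarchy = defaultdict(lambda: defaultdict(list))  # topic -> entity -> relations
        for subj, rel, obj in tuples:
            topic = topic_of.get(subj, "other")
            hierarchy[topic][subj].append((rel, obj))
        return hierarchy

    # Hypothetical tuples extracted from search results for a medical query.
    tuples = [
        ("aspirin", "treats", "headache"),
        ("aspirin", "interacts_with", "ibuprofen"),
        ("ibuprofen", "treats", "inflammation"),
    ]
    topic_of = {"aspirin": "drugs", "ibuprofen": "drugs"}
    for topic, entities in build_hierarchy(tuples, topic_of).items():
        print(topic, dict(entities))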