
    Information Outlook, April/May 2009

    Volume 13, Issue 3

    An Operational Utility Assessment: Measuring the Effectiveness of the Joint Concept Technology Demonstration (JCTD), Joint Forces Protection Advanced Security System (JFPASS)

    Sponsored Report (for Acquisition Research Program)

    Planning modern military operations requires an accurate intelligence assessment of potential threats, combined with a detailed assessment of the physical theater of operations. This information can then be combined with equipment and manpower resources to set up a logistically supportable operation that mitigates as much of the enemy threat as possible. Given such a daunting challenge, military planners often turn to intelligent software agents to support their efforts. The success of the mission often hinges on the accuracy of these plans and the integrity of the security umbrella provided. The purpose of this project is to provide a comprehensive assessment of the Joint Forces Protection Advanced Security System (JFPASS) Joint Concept Technology Demonstration (JCTD) to better meet force-protection needs. It will also address the adaptability of this technology to an ever-changing enemy threat through the use of intelligent software. This project will collect and analyze data pertaining to the research, development, testing, and effectiveness of the JFPASS and develop an operational effectiveness model to quantify overall system performance.

    Naval Postgraduate School Acquisition Research Program
    Approved for public release; distribution is unlimited.

    Users, Queries, and Bad Abandonment in Web Search

    After a user submits a query and receives a list of search results, the user may abandon the query without clicking on any of the results. A bad query abandonment occurs when a searcher abandons the SERP because they were dissatisfied with the quality of the search results, often reformulating the query in the hope of receiving better results. As we move closer to understanding when and why users abandon their queries under different qualities of search results, we move toward an overall understanding of user behavior with search engines. In this thesis, we describe three user studies that investigate bad query abandonment.

    First, we report on a study of the rate at which, and the time after which, users abandon their queries at different levels of search quality. We had users search for answers to questions, but showed them manipulated SERPs containing one relevant document placed at different ranks. We show that as the quality of search results decreases, the probability of abandonment increases, and that users decide to abandon their queries quickly. Users make their decisions fast, but not all users are the same: there appear to be two types of users who behave differently, with one group both more likely to abandon their queries and quicker to find answers than the other.

    Second, we describe an eye-tracking experiment focused on possible causes of users' willingness to examine SERPs and on what motivates users to continue or discontinue their examination. Using eye-tracking data, we found that a user's decision to abandon a query is best explained by the user's examination pattern not including a relevant search result. If a user sees a relevant result, they are very likely to click it. However, users' examination of results differs and may be influenced by other factors. The key factors we found are the rank of the search results, the user type, and the query quality. For example, we show that regardless of where the relevant document is placed in the SERP, the type of query submitted affects examination, and that if a user enters an ambiguous query, they are likely to examine fewer results.

    Third, we show how the nature of non-relevant material affects users' willingness to further explore a ranked list of search results. We constructed and showed participants manipulated SERPs with different types of non-relevant documents. We found that users' examination of search results and time to query abandonment are influenced by the coherence and type of non-relevant documents included in the SERP. For SERPs coherent on off-topic results, users spend the least time before abandoning and are less likely to request to view more results. The time they spend increases as SERP quality improves, and users are more likely to request more results when the SERP contains diversified non-relevant results spanning multiple subtopics.
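    The relationship measured in the first study can be illustrated with a toy simulation. This sketch is purely hypothetical: the scanning model, `p_continue` value, and all names are illustrative assumptions, not the thesis's actual experimental design or data.

    ```python
    import random

    def simulate_abandonment(relevant_rank, n_users=10_000, p_continue=0.7, seed=0):
        """Estimate the fraction of simulated users who abandon a SERP whose
        single relevant result sits at `relevant_rank` (1-based).

        Each simulated user scans top-down; after each non-relevant result
        they keep scanning with probability `p_continue`. Reaching the
        relevant result counts as a click, i.e. no abandonment."""
        rng = random.Random(seed)
        abandoned = 0
        for _ in range(n_users):
            for _rank in range(1, relevant_rank):
                if rng.random() >= p_continue:
                    # User gives up before reaching the relevant result.
                    abandoned += 1
                    break
        return abandoned / n_users

    # Abandonment probability rises as the relevant result moves down the list.
    rates = [simulate_abandonment(r) for r in (1, 3, 5, 10)]
    ```

    Under this model the abandonment probability is roughly 1 - p_continue^(rank-1), which echoes the study's qualitative finding that lower-quality SERPs (relevant document ranked lower) produce more abandonment.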

    Evaluating Information Retrieval and Access Tasks

    This open access book summarizes the first two decades of the NII Testbeds and Community for Information access Research (NTCIR). NTCIR is a series of evaluation forums run by a global team of researchers and hosted by the National Institute of Informatics (NII), Japan. The book is unique in that it discusses not just what was done at NTCIR, but also how it was done and the impact it has achieved. For example, in some chapters the reader sees the early seeds of what eventually grew into the search engines that provide access to content on the World Wide Web, today's smartphones that tailor what they show to the needs of their owners, and the smart speakers that enrich our lives at home and on the move. We also get glimpses into how new search engines can be built for mathematical formulae, or for the digital record of a lived human life.

    Key to the success of the NTCIR endeavor was the early recognition that information access research is an empirical discipline, and that evaluation therefore lies at the core of the enterprise. Evaluation is thus at the heart of each chapter in this book. The chapters show, for example, how the recognition that some documents are more important than others has shaped thinking about evaluation design. The thirty-three contributors to this volume speak for the many hundreds of researchers from dozens of countries around the world who together shaped NTCIR as organizers and participants. This book is suitable for researchers, practitioners, and students: anyone who wants to learn about past and present evaluation efforts in information retrieval, information access, and natural language processing, as well as those who want to participate in an evaluation task or even to design and organize one.