
    Interactive Symptom Elicitation for Diagnostic Information Retrieval

    Medical information retrieval suffers from a dual problem: users struggle to describe what they are experiencing in medical terms, and the search engine struggles to retrieve information that matches what users are experiencing. We demonstrate interactive symptom elicitation for diagnostic information retrieval. Interactive symptom elicitation builds a model from the user's initial description of the symptoms and interactively elicits new information by asking the user about related, but uncertain, symptoms. As a result, the system interactively learns estimates of the symptoms while controlling the uncertainties of the diagnostic process. The learned model is then used to rank the diagnoses that the user might be experiencing. Our preliminary experimental results show that interactive symptom elicitation can significantly improve users' ability to describe their symptoms, increase the confidence of the model, and enable effective diagnostic information retrieval.
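
    The abstract does not specify the underlying model, but the elicitation loop it describes can be sketched as follows: maintain a belief over diagnoses, ask about the unobserved symptom whose answer is most uncertain, and re-rank after each answer. This is a minimal illustrative sketch assuming a naive-Bayes belief and a toy, invented symptom/diagnosis table; it is not the paper's method.

    ```python
    # Minimal sketch of an interactive symptom-elicitation loop.
    # P_SYMPTOM_GIVEN_DX is a hypothetical toy knowledge base: P(symptom | diagnosis).
    P_SYMPTOM_GIVEN_DX = {
        "flu":     {"fever": 0.9, "cough": 0.8, "rash": 0.05},
        "measles": {"fever": 0.8, "cough": 0.5, "rash": 0.90},
        "cold":    {"fever": 0.2, "cough": 0.9, "rash": 0.01},
    }

    def posterior(observed):
        """Naive-Bayes posterior over diagnoses given {symptom: True/False}."""
        scores = {}
        for dx, likelihoods in P_SYMPTOM_GIVEN_DX.items():
            p = 1.0  # uniform prior over diagnoses
            for symptom, present in observed.items():
                ps = likelihoods.get(symptom, 0.1)
                p *= ps if present else (1.0 - ps)
            scores[dx] = p
        z = sum(scores.values()) or 1.0
        return {dx: p / z for dx, p in scores.items()}

    def next_question(observed, post):
        """Pick the unobserved symptom whose predicted presence is most
        uncertain (probability closest to 0.5) under the current posterior."""
        symptoms = {s for lk in P_SYMPTOM_GIVEN_DX.values() for s in lk}
        candidates = symptoms - observed.keys()
        def uncertainty(s):
            p = sum(post[dx] * P_SYMPTOM_GIVEN_DX[dx].get(s, 0.1) for dx in post)
            return abs(p - 0.5)
        return min(candidates, key=uncertainty) if candidates else None

    observed = {"fever": True}                 # parsed from the user's free text
    canned = {"rash": False, "cough": True}    # stand-in for interactive answers
    while (q := next_question(observed, posterior(observed))) is not None:
        observed[q] = canned.get(q, False)     # in practice: ask the user

    print(sorted(posterior(observed).items(), key=lambda kv: -kv[1]))
    ```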

    Query-dependent metric learning for adaptive, content-based image browsing and retrieval


    A study of search intermediary working notes: implications for IR system design

    This paper reports findings from an exploratory study investigating the working notes created during encoding and external storage (EES) processes by human search intermediaries using a Boolean information retrieval (IR) system. EES processes have been an important area of research in educational contexts, where students create and use notes to facilitate learning. In the context of interactive IR, encoding can be conceptualized as the process of creating working notes that help in understanding and translating a user's information problem into a search strategy suitable for use with an IR system. External storage is the process of using working notes to facilitate interaction with IR systems. Analysis of 221 sets of working notes created by human search intermediaries revealed extensive use of EES processes and the creation of working notes containing textual, numerical and graphical entities. Nearly 70% of the recorded working notes were textual/numerical entities, nearly 30% were graphical entities and 0.73% were indiscernible. Segmentation devices were also used in 48% of the working notes. The creation of working notes during EES processes was a fundamental element of the mediated, interactive IR process. Implications for the design of IR interfaces that support users' EES processes, and for further research, are discussed.

    Continued effects of context reinstatement in recognition

    The context reinstatement effect refers to the enhanced memory performance found when the context information paired with a target item at study is re-presented at test. Here we investigated how the processing of context information in such a setting gives rise to its beneficial effect on item recognition memory. Specifically, we assessed whether reinstating context in a recognition test facilitates subsequent memory for that context beyond the facilitation conferred by presentation of the same context with a different study item. Reinstating study context at test led to better accuracy in 2-alternative forced-choice recognition for target faces than did re-pairing those faces with another context encountered during the study phase. The advantage of reinstated over re-paired conditions occurred for both within-subjects (Experiment 1) and between-subjects (Experiment 2) manipulations. Critically, in a subsequent recognition test for the contexts themselves, contexts previously in the reinstated condition were recognized better than contexts previously in the re-paired condition. This constitutes the first demonstration of continued effects of context reinstatement on memory for context.

    Associating characters with events in films

    The work presented here combines the analysis of a film's audiovisual features with the analysis of an accompanying audio description. Specifically, we describe a technique for semantic-based indexing of feature films that associates character names with meaningful events. The technique fuses the results of event detection based on audiovisual features with the inferred on-screen presence of characters, based on an analysis of an audio description script. In an evaluation with 215 events from 11 films, the technique performed the character detection task with Precision = 93% and Recall = 71%. We then go on to show how novel access modes to film content are enabled by our analysis. The specific examples illustrated include video retrieval via a combination of event type and character name, and our first steps towards visualization of narrative and character interplay based on characters' occurrence and co-occurrence in events.
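
    The fusion step the abstract describes can be illustrated by matching detected event intervals against character on-screen intervals inferred from the audio description script. The intervals, names and the overlap threshold below are invented for illustration; the paper's exact fusion rule is not given in the abstract.

    ```python
    # Illustrative sketch: associate a character with a detected event when the
    # character's inferred on-screen interval overlaps the event interval.
    from dataclasses import dataclass

    @dataclass
    class Interval:
        start: float  # seconds
        end: float

    def overlap(a: Interval, b: Interval) -> float:
        """Length of the temporal overlap between two intervals, in seconds."""
        return max(0.0, min(a.end, b.end) - max(a.start, b.start))

    # Events come from audiovisual analysis; presence comes from the AD script.
    events = {"event_1": Interval(120.0, 135.0)}
    presence = {"Alice": [Interval(118.0, 130.0)],
                "Bob":   [Interval(300.0, 320.0)]}

    MIN_OVERLAP = 1.0  # assumed threshold, in seconds
    for name, ev in events.items():
        chars = [c for c, spans in presence.items()
                 if any(overlap(ev, s) >= MIN_OVERLAP for s in spans)]
        print(name, "->", chars)  # event_1 -> ['Alice']
    ```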

    K-Space at TRECVid 2007

    In this paper we describe K-Space's participation in TRECVid 2007. K-Space participated in two tasks: high-level feature extraction and interactive search. We present our approaches for each of these activities and provide a brief analysis of our results. Our high-level feature submission utilized multi-modal low-level features, which included visual, audio and temporal elements. Specific concept detectors (such as face detectors) developed by K-Space partners were also used. We experimented with different machine learning approaches, including logistic regression and support vector machines (SVM). Finally, we experimented with both early and late fusion for feature combination. This year we also participated in interactive search, submitting 6 runs. We developed two interfaces which both utilized the same retrieval functionality. Our objective was to measure the effect of context, which was supported to different degrees in each interface, on user performance. The first of the two systems was a 'shot'-based interface, where the results from a query were presented as a ranked list of shots. The second interface was 'broadcast'-based, where results were presented as a ranked list of broadcasts. Both systems made use of the outputs of our high-level feature submission as well as low-level visual features.
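
    The late-fusion idea mentioned in this abstract can be sketched in a few lines: train one classifier per modality (here an SVM on visual features and logistic regression on audio features, per the abstract), then combine their probability outputs. The synthetic data and fusion weights below are assumptions for illustration, not the K-Space configuration.

    ```python
    # Rough sketch of late fusion over per-modality classifier scores.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 200)                             # concept present/absent
    X_visual = rng.normal(size=(200, 16)) + y[:, None]      # toy visual features
    X_audio = rng.normal(size=(200, 8)) + 0.5 * y[:, None]  # toy audio features

    visual_clf = SVC(probability=True).fit(X_visual, y)
    audio_clf = LogisticRegression(max_iter=1000).fit(X_audio, y)

    # Late fusion: weighted average of per-modality concept probabilities.
    w_visual, w_audio = 0.6, 0.4  # assumed weights
    fused = (w_visual * visual_clf.predict_proba(X_visual)[:, 1]
             + w_audio * audio_clf.predict_proba(X_audio)[:, 1])
    print("fused accuracy:", ((fused > 0.5) == y).mean())
    ```

    Early fusion, by contrast, would concatenate the visual and audio feature vectors and train a single classifier on the combined representation.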