
    K-Space Interactive Search

    In this paper we present the K-Space Interactive Search system for content-based video information retrieval, to be demonstrated at the VideOlympics. This system is an extension of the system we developed as part of our participation in TRECVid 2007 [1]. In TRECVid 2007 we created two interfaces, known as the ‘Shot’-based and ‘Broadcast’-based interfaces. Our VideOlympics submission takes these two interfaces and the lessons learned from our user experiments to create a single user interface that attempts to leverage the best aspects of both.

    K-Space at TRECVid 2007

    In this paper we describe K-Space's participation in TRECVid 2007. K-Space participated in two tasks: high-level feature extraction and interactive search. We present our approaches for each of these activities and provide a brief analysis of our results. Our high-level feature submission utilized multi-modal low-level features which included visual, audio and temporal elements. Specific concept detectors (such as face detectors) developed by K-Space partners were also used. We experimented with different machine learning approaches, including logistic regression and support vector machines (SVMs). Finally, we also experimented with both early and late fusion for feature combination. This year we also participated in interactive search, submitting 6 runs. We developed two interfaces which both utilized the same retrieval functionality. Our objective was to measure the effect of context, which was supported to different degrees in each interface, on user performance. The first of the two systems was a ‘shot’-based interface, where the results from a query were presented as a ranked list of shots. The second interface was ‘broadcast’-based, where results were presented as a ranked list of broadcasts. Both systems made use of the outputs of our high-level feature submission as well as low-level visual features.
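
    The fusion strategies mentioned above can be made concrete with a small sketch. The following illustrates early versus late fusion for a single concept detector, assuming synthetic per-shot visual and audio feature matrices; this is an illustrative sketch, not the K-Space implementation, and all names are hypothetical.

    # Minimal sketch of early vs. late fusion for concept detection,
    # using synthetic stand-in features (not the paper's code or data).
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X_visual = rng.normal(size=(200, 64))   # stand-in visual features per shot
    X_audio = rng.normal(size=(200, 16))    # stand-in audio features per shot
    y = rng.integers(0, 2, size=200)        # concept present / absent

    # Early fusion: concatenate the modalities, train a single classifier.
    early = SVC(probability=True).fit(np.hstack([X_visual, X_audio]), y)

    # Late fusion: train one classifier per modality, then average scores.
    vis_clf = SVC(probability=True).fit(X_visual, y)
    aud_clf = SVC(probability=True).fit(X_audio, y)
    late_scores = (vis_clf.predict_proba(X_visual)[:, 1]
                   + aud_clf.predict_proba(X_audio)[:, 1]) / 2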

    K-Space at TRECVid 2008

    In this paper we describe K-Space’s participation in TRECVid 2008 in the interactive search task. For 2008 the K-Space group performed one of the largest interactive video information retrieval experiments conducted in a laboratory setting. We had three institutions participating in a multi-site, multi-system experiment. In total 36 users participated, 12 each from Dublin City University (DCU, Ireland), University of Glasgow (GU, Scotland) and Centrum Wiskunde & Informatica (CWI, the Netherlands). Three user interfaces were developed: two from DCU, which were also used in 2007, as well as an interface from GU. All interfaces leveraged the same search service. Using a Latin squares arrangement, each user completed 12 topics, yielding 6 runs per site and 18 in total. We officially submitted 3 of these runs to NIST for evaluation, with an additional expert run using a 4th system. Our submitted runs performed around the median. In this paper we present an overview of the search system utilized, the experimental setup and a preliminary analysis of our results.
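
    The Latin squares arrangement mentioned above can be illustrated with a minimal sketch. Assuming 12 users and 12 topics per site (a simplification introduced here, not the exact experimental design), a cyclic construction balances topic order so that each topic appears once per user and once per session position.

    # Minimal sketch of a cyclic Latin square for balancing topic order
    # across users (illustrative; not the actual TRECVid assignment).
    def latin_square(n):
        # Row i is the topic sequence for user i; each topic appears once
        # per row and once per column (session position).
        return [[(i + j) % n for j in range(n)] for i in range(n)]

    for user, topics in enumerate(latin_square(12)):
        print(f"user {user}: topic order {topics}")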

    Measuring the impact of temporal context on video retrieval

    In this paper we describe the findings from the K-Space interactive video search experiments in TRECVid 2007, which examined the effects of including temporal context in video retrieval. The traditional approach to presenting video search results is to maximise recall by offering a user as many potentially relevant shots as possible within a limited amount of time. ‘Context’-oriented systems opt to allocate a portion of the results presentation space to providing additional contextual cues about the returned results. In video retrieval these cues often include temporal information such as a shot’s location within the overall video broadcast and/or its neighbouring shots. We developed two interfaces with identical retrieval functionality in order to measure the effects of such context on user performance. The first system had a ‘recall-oriented’ interface, where results from a query were presented as a ranked list of shots. The second was ‘context-oriented’, with results presented as a ranked list of broadcasts. 10 users participated in the experiments, of whom 8 were novices and 2 were experts. Participants completed a number of retrieval topics using both the recall-oriented and context-oriented systems.
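
    As a rough illustration of the ‘context-oriented’ presentation described above, the following sketch groups a ranked shot list by broadcast so that each result can be displayed alongside its temporal neighbours. The record fields are hypothetical, not taken from the actual system.

    # Illustrative grouping of ranked shots into a broadcast-level view,
    # assuming each shot record carries a broadcast id and position.
    from itertools import groupby

    ranked_shots = [
        {"shot": "s3", "broadcast": "b1", "pos": 3, "score": 0.9},
        {"shot": "s7", "broadcast": "b2", "pos": 1, "score": 0.8},
        {"shot": "s4", "broadcast": "b1", "pos": 4, "score": 0.7},
    ]

    # Group shots by broadcast so each result is shown in temporal context
    # rather than as an isolated shot.
    by_broadcast = sorted(ranked_shots, key=lambda s: s["broadcast"])
    for bid, shots in groupby(by_broadcast, key=lambda s: s["broadcast"]):
        ordered = sorted(shots, key=lambda s: s["pos"])
        print(bid, [s["shot"] for s in ordered])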

    What is the Meaning in This? Teachers' Propensity to Search for Meaning in Life During COVID-19 and the Role of Meaningful Work

    The global COVID-19 pandemic has presented notable challenges in teachers’ career paths. In the present study, Super’s life-span, life-space theory was applied to examine the interplay between K-12 teachers’ propensity to search for meaning in life and the meaningfulness attributed to their work role (i.e., meaningful work) in predicting career-relevant outcomes in the face of challenging circumstances over the course of a semester. A model was proposed in which propensity to search for meaning in life led to better work and career outcomes, an effect moderated by meaningful work. Longitudinal data from a sample of 617 teachers over eight outcome measurement timepoints across the fall 2020 semester were leveraged to test the model using a latent growth curve modeling approach. Meaningful work was positively related to self-rated job performance and intrinsic work motivation, an effect that was stable over time. Interactive effects between propensity to search for meaning in life and meaningful work were found for intrinsic work motivation and occupational turnover intentions. At low meaningful work, those with higher propensity to search for meaning in life had higher intrinsic work motivation at the start of the semester and over time than those with low propensity to search for meaning. At high meaningful work, those with higher propensity to search for meaning in life had higher occupational turnover intentions than those with low propensity to search for meaning. Important implications for our understanding of meaning-making regarding roles in the life-space during challenging circumstances in the life-span and the practical applications of these findings for professions, organizations, and leaders are discussed.
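
    To make the modeling approach concrete, a generic latent growth curve specification with predictors looks as follows (an illustrative textbook form, not necessarily the exact model estimated in the study). For teacher i at timepoint t:

    y_{ti} = \eta_{0i} + \lambda_t \eta_{1i} + \varepsilon_{ti}
    \eta_{0i} = \alpha_0 + \gamma_{01} s_i + \gamma_{02} m_i + \gamma_{03} s_i m_i + \zeta_{0i}
    \eta_{1i} = \alpha_1 + \gamma_{11} s_i + \gamma_{12} m_i + \gamma_{13} s_i m_i + \zeta_{1i}

    where \eta_{0i} and \eta_{1i} are the latent intercept and slope, \lambda_t codes the eight measurement occasions, and the s_i m_i interaction terms carry the moderation effects reported above. The symbols s_i (propensity to search for meaning in life), m_i (meaningful work) and the \gamma coefficients are notation introduced here for illustration.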

    Relevant clouds: leveraging relevance feedback to build tag clouds for image search

    Previous work in the literature has been aimed at exploring tag clouds to improve image search and potentially increase retrieval performance. However, to date none has considered the idea of building tag clouds derived from relevance feedback. We propose a simple approach to such an idea, where the tag cloud gives more importance to the words from the relevant images than the non-relevant ones. A preliminary study with 164 queries inspected by 14 participants over a 30M dataset of automatically annotated images showed that 1) tag clouds derived this way are found to be informative: users considered roughly 20% of the presented tags to be relevant for any query at any time; and 2) the importance given to the tags correlates with user judgments: tags ranked in the first positions tended to be perceived more often as relevant to the topic that users had in mind.
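
    A minimal sketch of the proposed weighting idea follows, assuming each image carries a list of annotation tags. The function and its penalty parameter are illustrative assumptions, not the authors' implementation.

    # Hypothetical relevance-feedback tag weighting: tags from relevant
    # images gain weight, tags also seen in non-relevant images lose some.
    from collections import Counter

    def tag_cloud_weights(relevant_tags, nonrelevant_tags, penalty=0.5):
        # Count tag occurrences over relevant and non-relevant images,
        # then down-weight tags that also appear in non-relevant ones.
        pos = Counter(t for tags in relevant_tags for t in tags)
        neg = Counter(t for tags in nonrelevant_tags for t in tags)
        return {t: pos[t] - penalty * neg.get(t, 0) for t in pos}

    weights = tag_cloud_weights([["beach", "sea"], ["sea", "sky"]],
                                [["city", "sea"]])
    print(weights)  # larger weight -> larger font in the tag cloud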