Search Behaviour On Photo Sharing Platforms
The behaviour, goals, and intentions of users while searching for images in large-scale online collections are not well understood, and image search log analysis provides limited insights, in part because such logs tend to contain only user queries and result-click information. In this paper we study user search behaviour on a large photo-sharing platform, analyzing all user actions during search sessions (i.e. including post-result-click pageviews). Search accounts for a significant part of user interactions with such platforms, and we show differences between the queries issued on such platforms and those on general image search. We show that search behaviour is influenced by the query type, and also depends on the user. Finally, we analyse how users behave when they reformulate their queries, and develop URL class prediction models for image search, showing that query-specific models significantly outperform query-agnostic models. The insights provided in this paper are intended as a launching point for the design of better interfaces and ranking models for image search. © 2013 IEEE.
Query Modification Patterns and Concept Analysis of Web Image Queries
This study investigated query modification patterns and the concepts used in query construction as users searched for images on the Web. It examined whether query modifications were related to different content collections, and analyzed what attributes were used to formulate a query in an interactive web searching process. Findings of the study show that query modification patterns were significantly associated with content collections. Terms related to format, or to specific objects representing an image, were found to be frequently used in reformulations and specializations. The findings suggest that system features for flexible query formulation and navigation would support users' image search process.
Examining User Engagement Attributes in Visual Information Search
This study performs an exploratory factor analysis (EFA) to examine the User Engagement Scale (UES) in the setting of daily-life visual information search (i.e. searching for images and/or videos using Web-based information systems). Principal Components Factor Analysis was employed to examine the six sub-scales of user engagement while searching for images and/or videos on Web-based systems. Results indicated that the most stable sub-scale was Aesthetics (AE), followed by Focused Attention (FA), while Perceived Usability (PUs) retained two of eight items and Novelty (NO) retained one of three items. Items from Felt Involvement (FI) and Endurability (EN) merged with the remaining items from PUs or NO to form two factors. Ninety-one college students responded to an online questionnaire over a two-week period. The findings showed that more than 57% of the users chose Google as their first choice for searching visual information, while YouTube, social media, and specialist sites were also used for their daily-life visual information search.
Journalistic image access: description, categorization and searching
The quantity of digital imagery continues to grow, creating a pressing need to develop efficient methods for organizing and retrieving images. Knowledge of user behavior in image description and search is required for creating effective and satisfying search experiences. The nature of visual information and of journalistic images creates challenges in representing and matching images with user needs.
The goal of this dissertation was to understand the processes in journalistic image access (description, categorization, and searching), and the effects of contextual factors on preferred access points. These were studied using multiple data collection and analysis methods across several studies. Image attributes used to describe journalistic imagery were analyzed based on description tasks and compared to a typology developed through a meta-analysis of literature on image attributes. Journalistic image search processes and query types were analyzed through a field study and multimodal image retrieval experiment. Image categorization was studied via sorting experiments leading to a categorization model. Advances to research methods concerning search tasks and categorization procedures were implemented.
Contextual effects on image access were found to relate to organizational contexts, work and search tasks, and publication context. Image retrieval in a journalistic work context was contextual at the level of image needs and the search process. While text queries, together with browsing, remained the key access mode to journalistic imagery, participants also used visual access modes in the experiment, constructing multimodal queries. The assigned search task type and searcher expertise affected the query modes utilized. Journalistic images were mostly described and queried for on the semantic level, but syntactic attributes were also used. Constraining the description led to more abstract descriptions. Image similarity was evaluated mainly on the basis of generic semantics. However, functionally oriented categories were also constructed, especially by domain experts. Availability of page context promoted thematic rather than object-based categorization.
The findings increase our understanding of user behavior in image description, categorization, and searching, and have implications for future solutions in journalistic image access. The contexts of image production, use, and search merit more interest in research, as these could be leveraged to support annotation and retrieval. Multiple access points should be created for journalistic images based on image content and function. Support for multimodal query formulation should also be offered. The contributions of this dissertation may be used to create evaluation criteria for journalistic image access systems.
Validating and Developing the User Engagement Scale in Web-based Visual Information Searching
Guided by the theoretical frameworks of interactive information searching and user engagement (UE), this study proposed sense discovery (SD) as a UE attribute and suggested a refined four-factor user engagement scale (UES) model for the measurement of users’ psychological involvement in web-based visual information searching. Using a mixed-methods approach based on a survey, this study confirmed the inter-item reliability of the original six-factor UES in three visual contexts—a general visual context, image searching on Google (ISG), and video searching on YouTube (VSY). Principal component analyses (PCA) partially confirmed the internal consistency of the original six UE subscales and suggested conceptual overlaps among four of the six original subscales. Through thematic and sentiment analyses of the participants’ visual information needs, the study further explored their positive experience and categorized a total of eight items related to SD. Based on the findings, a refined four-factor UES model, which can be flexibly administered, is proposed to measure users’ psychological involvement in web-based visual information searching.
Exploring the effectiveness of similarity-based visualisations for colour-based image retrieval
In April 2009, Google Images added a filter for narrowing search results by colour. Several other systems for searching image databases by colour were also released around this time. These colour-based image retrieval systems enable users to search image databases either by selecting colours from a graphical palette (i.e., query-by-colour), by drawing a representation of the colour layout sought (i.e., query-by-sketch), or both. It was comments left by readers of online articles describing these colour-based image retrieval systems that provided us with the inspiration for this research. We were surprised to learn that the underlying query-based technology used in colour-based image retrieval systems today remains remarkably similar to that of systems developed nearly two decades ago. Discovering this ageing retrieval approach, as well as uncovering a large user demographic requiring image search by colour, made us eager to research more effective approaches for colour-based image retrieval. In this thesis, we detail two user studies designed to compare the effectiveness of systems adopting similarity-based visualisations, query-based approaches, or a combination of both, for colour-based image retrieval. In contrast to query-based approaches, similarity-based visualisations display and arrange database images so that images with similar content are located closer together on screen than images with dissimilar content. This removes the need for queries, as users can instead visually explore the database using interactive navigation tools to retrieve images from the database. 
As we found existing evaluation approaches to be unreliable, we describe how we assessed and compared systems adopting similarity-based visualisations, query-based approaches, or both, meaningfully and systematically using our Mosaic Test: a user-based evaluation approach in which study participants complete an image mosaic of a predetermined target image using the colour-based image retrieval system under evaluation.
Image search: an investigation of factors affecting search behaviour of users
Searching for images can be challenging. How users search for images is governed by their information need. Nevertheless, in fulfilling their information need, users are often affected by subjective factors, including topic familiarity, task difficulty, relevance criteria and satisfaction. This thesis focuses on three research questions exploring how image information needs, together with these factors, affect online web users' searching behaviour: 1. How does image information need affect the criteria users apply when selecting relevant images? 2. How do different factors in image retrieval affect users' image searching behaviour? 3. Can we identify image information needs solely from user queries? In addressing these questions, we conducted both user studies and proxy log analysis, as the two complement each other: user studies are conducted in a laboratory setting with artificial needs, while proxy logs capture users' actual needs and behaviour in the wild. The main user study involved 48 students of various disciplines from RMIT University. In the study, we represent image information needs as types of tasks. Data were collected from questionnaires and screen capture recordings. The questionnaire was used to collect data on the criteria users find important when judging image relevance, and on their perceptions of the effects of subjective factors on their searching. Screen capture recordings of their search activities were observed and time-stamped to identify and measure search and retrieval behaviour. These measures were used to evaluate the effects of subjective factors on users' image search behaviour. The results showed that, in judging image relevance, users may apply similar criteria; however, the importance of these criteria depends on the type of image. Similarly, ratings of users' perceptions of aspects of performing image search showed that these were task dependent and that the effects of different aspects were related.
Users were more affected by familiarity and satisfaction when performing difficult image search tasks. Correlation results suggest that users' perceptions of aspects of performing image search did not always correspond with their actual search behaviour. However, for some subjective aspects of user search behaviour, we identified particular objective measures that correlate well with that aspect. The examination of users' queries in proxy logs shows that users search for unambiguous images more frequently than for conceptual images. Their sessions are short, with two to three terms per query. When analysing queries from logs, we are effectively guessing what users were searching for; however, examining the way users modify or reformulate their queries may give an indication of their information need. Results show that users frequently submit new queries or replace terms from their previous query, rather than revising the query in more depth or breadth. Similar findings emerged when compared with the user study data, whereby users in both settings exhibited similar numbers of queries and terms and similar reformulation types. This thesis concludes that, given similar image information needs, ordinary users make relevance judgements similar to those of specialised users (such as journalists, art historians and medical doctors), despite attending to different relevance criteria. Moreover, only certain measures of search behaviour used in text retrieval are applicable to image retrieval, owing to differences in judging the relevance of textual information and of images. In addition, visual information needs can be better inferred by analysing series of queries and their reformulations within a search session.
Selecting and tailoring of images for online news content: a mixed-methods investigation of the needs and behaviour of image users in online journalism
This mixed-methods investigation explores how image professionals in online journalism search for, select and use images from large online collections. Further, findings from this exploration are used to devise and evaluate a needs-based practical solution for improving image retrieval.
The exploratory stage included semi-structured interviews and observations in situ, and provided several important contributions to the current understanding of the needs and behaviour of image users in the fully disintermediated environment of the online newsroom. This study found that these image users are creative professionals and self-taught yet confident image searchers. When illustrating news content, they apply a shared knowledge of how a specific image function (e.g., the dominant image) must be presented visually to reach its full communication potential. This common understanding of image communicative functions has two implications for how these professionals search for and select images. Firstly, they begin searches with clear image needs pre-defined on multiple levels of image description, including visual image features, and their behaviour is consistent with targeted searching. This contradicts the previously reported preference for browsing as the typical mode of searching in online image collections. Secondly, they do not easily compromise on image needs related to visual features. When searches prove ineffective, they resort to editing skills and tailor the available images to match their original needs.
Further, it was found that the choice of images for headline content can in fact be predicted by a set of 11 visual image features. The features were extracted from a collection of artefacts created in the observation sessions and described by means of the Visual Social Semiotics (VSS) framework. The feature set was implemented as a filtering mechanism in a prototype and evaluated in a within-subjects experimental design study with image professionals. This experiment showed a significant positive change in the behaviour of users when interacting with images pre-filtered strictly to their visual needs, a change not observed in the baseline system. This was demonstrated through users’ ability to immediately engage in the inspection of images at a level of detail, and to make straightforward selections. Images from the experimental sets required no, or only minimal, tailoring, as confirmed in the final VSS-based survey with independent image experts.
Other important contributions of this investigation include updated models. Firstly, the illustration task process framework originally proposed by Markkula and Sormunen (2000) has been refined to include an image tailoring phase in which creative professionals apply editorial treatment before publication. Further, the observations revealed that verifying images, consistent with the feature in Ellis et al.’s model (Ellis et al., 1993), was an activity critical to making selection decisions in online journalism. Therefore, Conniss et al.’s model of the image searching process (Conniss et al., 2000) has been updated to include a verifying phase.
The investigation concludes that, in order to meet the needs of creative image professionals in online journalism, image retrieval systems must support targeted searching and facilitate direct access to required images that can be easily verified for authenticity. The proposed multi-feature filtering system, firmly rooted in image users’ needs, appears to be a step towards automating image retrieval.