Visual intelligence for online communities : commonsense image retrieval by query expansion
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2004. Includes bibliographical references (leaves 65-67). This thesis explores three weaknesses of keyword-based image retrieval through the design and implementation of an actual image retrieval system. The first weakness is the requirement of heavy manual annotation of keywords for images. We investigate this weakness by aggregating the annotations of an entire community of users to alleviate the annotation requirements on the individual user. The second weakness is the hit-or-miss nature of exact keyword matching used in many existing image retrieval systems. We explore this weakness by using linguistic tools (WordNet and the OpenMind Commonsense database) to locate image keywords in a semantic network of interrelated concepts, so that retrieval by keywords is automatically expanded semantically and the hit-or-miss problem is avoided. Such semantic query expansion further alleviates the requirement for exhaustive manual annotation. The third weakness of keyword-based image retrieval systems is the lack of support for retrieval by subjective content. We investigate this weakness by creating a mechanism that allows users to annotate images with their subjective emotional content and subsequently to retrieve images by these emotions. This thesis is primarily an exploration of different keyword-based image retrieval techniques in a real image retrieval system. The design of the system is grounded in past research that sheds light on how people actually approach the task of describing images with words for future retrieval. The image retrieval system's front-end and back-end are fully integrated with the Treehouse Global Studio online community - an online environment with a suite of media design tools and database storage of media files and metadata.
The focus of the thesis is on exploring new user scenarios for keyword-based image retrieval rather than on quantitative assessment of retrieval effectiveness. Traditional information retrieval evaluation metrics are discussed but not pursued. The user scenarios for our image retrieval system are analyzed qualitatively in terms of system design and how they facilitate the overall retrieval experience. James Jian Dai. S.M.
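The semantic query expansion the thesis describes can be sketched as a breadth-first walk over a concept graph. The snippet below is a minimal illustration only: the tiny hand-built `SEMANTIC_LINKS` dictionary stands in for WordNet/OpenMind, and all terms, links, and the `expand_query` helper are our own illustrative names, not the thesis's actual implementation.

```python
from collections import deque

# Toy semantic network: an illustrative stand-in for WordNet/OpenMind links.
SEMANTIC_LINKS = {
    "dog": {"puppy", "pet", "animal"},
    "pet": {"cat", "dog"},
    "animal": {"creature"},
}

def expand_query(keyword, max_depth=2):
    """Expand a query keyword to related concepts via breadth-first search.

    Concepts reached within max_depth hops of the original keyword are
    included, so an exact-match miss on the keyword can still be a hit
    on a semantically related term.
    """
    seen = {keyword}
    frontier = deque([(keyword, 0)])
    while frontier:
        term, depth = frontier.popleft()
        if depth == max_depth:
            continue  # do not expand beyond the hop limit
        for related in SEMANTIC_LINKS.get(term, ()):
            if related not in seen:
                seen.add(related)
                frontier.append((related, depth + 1))
    return seen
```

A query for "dog" would then also match images annotated only with "pet" or "animal", which is the behaviour the abstract describes as avoiding the hit-or-miss problem.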
Users' effectiveness and satisfaction for image retrieval
This paper presents results from an initial user study exploring the relationship between system effectiveness, as quantified by traditional measures such as precision and recall, and users' effectiveness and satisfaction with the results. The study tasks involved finding images for recall-based tasks. It was concluded that no direct relationship between system effectiveness and users' performance could be proven (as shown by previous research): people learn to adapt to a system regardless of its effectiveness. This study recommends that a combination of attributes (e.g. system effectiveness, user performance and satisfaction) is a more effective way to evaluate interactive retrieval systems. Results of this study also reveal that users are more concerned with the accuracy than with the coverage of the search results.
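The traditional measures the abstract contrasts with user performance have standard set-based definitions; a minimal sketch for a single query (the `precision_recall` helper is our own name, not from the paper):

```python
def precision_recall(retrieved, relevant):
    """Set-based precision and recall for one query.

    precision = |retrieved ∩ relevant| / |retrieved|   (accuracy of results)
    recall    = |retrieved ∩ relevant| / |relevant|    (coverage of results)
    """
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall
```

The paper's finding that users care more about accuracy than coverage maps onto these two quantities: precision reflects accuracy of what was returned, recall reflects coverage of everything relevant.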
Can a workspace help to overcome the query formulation problem in image retrieval?
We have proposed a novel image retrieval system that incorporates a workspace where users can organise their search results. A task-oriented and user-centred experiment has been devised involving design professionals and several types of realistic search tasks. We study the workspace's effect on two aspects: task conceptualisation and query formulation. A traditional relevance feedback system serves as baseline. The results of this study show that the workspace is more useful with respect to both of the above aspects. The proposed approach leads to a more effective and enjoyable search experience.
An adaptive technique for content-based image retrieval
We discuss an adaptive approach towards Content-Based Image Retrieval. It is based on the Ostensive Model of developing information needs - a special kind of relevance feedback model that learns from implicit user feedback and adds a temporal notion to relevance. The ostensive approach supports content-assisted browsing through visualising the interaction by adding user-selected images to a browsing path, which ends with a set of system recommendations. The suggestions are based on an adaptive query learning scheme, in which the query is learnt from previously selected images. Our approach is an adaptation of the original Ostensive Model, based on textual features only, to include content-based features to characterise images. In the proposed scheme, textual and colour features are combined using the Dempster-Shafer theory of evidence combination. Results from a user-centred, work-task oriented evaluation show that the ostensive interface is preferred over a traditional interface with manual query facilities. This is due to its ability to adapt to the user's need, its intuitiveness and the fluid way in which it operates. Studying and comparing the nature of the underlying information need, it emerges that our approach elicits changes in the user's need based on the interaction, and is successful in adapting the retrieval to match the changes. In addition, a preliminary study of the retrieval performance of the ostensive relevance feedback scheme shows that it can outperform a standard relevance feedback strategy in terms of image recall in category search.
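The evidence-combination step the abstract mentions can be illustrated with a generic implementation of Dempster's rule of combination. This is a sketch of the general rule over frozenset focal elements, not the paper's actual textual/colour feature pipeline; the mass functions in the test are invented examples.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two Dempster-Shafer mass functions by Dempster's rule.

    m1, m2: dicts mapping frozenset focal elements to mass (summing to 1).
    Masses of intersecting focal elements multiply and accumulate; mass
    assigned to conflicting (disjoint) pairs is discarded and the rest
    is renormalised by 1 - K, where K is the total conflict.
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict; sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```

In a retrieval setting the two mass functions would come from independent evidence sources (e.g. one per feature modality), and the combined masses would rank candidate images.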
TRECVID: benchmarking the effectiveness of information retrieval tasks on digital video
Many research groups worldwide are now investigating techniques which can support information retrieval on archives of digital video, and as groups move on to implement these techniques they inevitably try to evaluate their performance in practical situations. The difficulty with doing this is that there is no test collection or environment in which the effectiveness of video IR, or of video IR sub-tasks, can be evaluated and compared. The annual series of TREC exercises has, for over a decade, been benchmarking the effectiveness of systems in carrying out various information retrieval tasks on text and audio, and has contributed to a huge improvement in many of these. Two years ago, a track was introduced which covers shot boundary detection, feature extraction and searching through archives of digital video. In this paper we present a summary of the activities in the TREC Video track in 2002, where 17 teams from across the world took part.
Recent Developments in Cultural Heritage Image Databases: Directions for User-Centered Design
published or submitted for publication
Aesthetic-Driven Image Enhancement by Adversarial Learning
We introduce EnhanceGAN, an adversarial-learning-based model that performs automatic image enhancement. Traditional image enhancement frameworks typically involve training models in a fully supervised manner, which requires expensive annotations in the form of aligned image pairs. In contrast to these approaches, our proposed EnhanceGAN requires only weak supervision (binary labels on image aesthetic quality) and is able to learn enhancement operators for the task of aesthetic-based image enhancement. In particular, we show the effectiveness of a piecewise color enhancement module trained with weak supervision, and extend the proposed EnhanceGAN framework to learning a deep filtering-based aesthetic enhancer. The full differentiability of our image enhancement operators enables the training of EnhanceGAN in an end-to-end manner. We further demonstrate the capability of EnhanceGAN in learning aesthetic-based image cropping without any ground-truth cropping pairs. Our weakly supervised EnhanceGAN reports competitive quantitative results on aesthetic-based color enhancement as well as automatic image cropping, and a user study confirms that our image enhancement results are on par with, or even preferred over, professional enhancement.
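To give a concrete sense of what a piecewise color enhancement operator is, the sketch below applies a monotone piecewise-linear intensity curve whose knot values could serve as learnable parameters. This is a NumPy stand-in of our own devising, not EnhanceGAN's actual learned module or its differentiable implementation.

```python
import numpy as np

def piecewise_color_curve(image, knots):
    """Map intensities in [0, 1] through a piecewise-linear curve.

    knots: output values at evenly spaced input positions 0..1; in a
    learned setting these would be the trainable parameters of the
    enhancement operator. Evenly spaced knots equal to their positions
    give the identity mapping.
    """
    xs = np.linspace(0.0, 1.0, num=len(knots))
    return np.interp(image, xs, knots)
```

Because linear interpolation is piecewise differentiable in the knot values, a curve of this shape can sit inside a network trained end-to-end, which is the property the abstract highlights.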