Overview of the ImageCLEFphoto 2008 photographic retrieval task
ImageCLEFphoto 2008 is an ad-hoc photo retrieval task and part of the ImageCLEF
evaluation campaign. This task provides both the resources and the framework
necessary to perform comparative laboratory-style evaluation of visual information
retrieval systems. In 2008, the evaluation task concentrated on promoting diversity
within the top 20 results from a multilingual image collection. This new challenge
attracted a record number of submissions: a total of 24 participating groups
submitting 1,042 system runs. Findings include that the choice of annotation
language has an almost negligible effect on performance, and that the best runs
combine concept-based and content-based retrieval methods.
Semantic spaces revisited: investigating the performance of auto-annotation and semantic retrieval using semantic spaces
Semantic spaces encode similarity relationships between objects as a function of position in a mathematical space. This paper discusses three different formulations for building semantic spaces that allow the automatic annotation and semantic retrieval of images. The models discussed in this paper require that the image content be described in the form of a series of visual terms, rather than as a continuous feature vector. The paper also discusses how these term-based models compare to the latest state-of-the-art continuous feature models for auto-annotation and retrieval.
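The term-based retrieval idea above can be illustrated with a minimal, hypothetical sketch: images are bags of visual terms, and retrieval ranks images by similarity to a query in term space. The vocabulary, image data, and plain cosine scoring here are illustrative assumptions, not the paper's actual semantic-space models.

```python
from collections import Counter
from math import sqrt

# Toy "visual-term" descriptions of images (hypothetical vocabulary).
images = {
    "img1": ["sky", "sea", "sand"],
    "img2": ["sky", "cloud", "grass"],
    "img3": ["sea", "boat", "sky"],
}

def vec(terms):
    """Represent a list of visual terms as a sparse count vector."""
    return Counter(terms)

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_terms, k=2):
    """Rank images by similarity to the query in term space."""
    q = vec(query_terms)
    ranked = sorted(images, key=lambda i: cosine(q, vec(images[i])),
                    reverse=True)
    return ranked[:k]

print(retrieve(["sea", "boat"]))  # → ['img3', 'img1']
```

A real semantic space would additionally project these term vectors into a lower-dimensional space (e.g. via an SVD) so that related terms land near each other; the sketch keeps only the term-vector representation the abstract describes.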
CIDI-Lung-Seg: A Single-Click Annotation Tool for Automatic Delineation of Lungs from CT Scans
Accurate and fast extraction of lung volumes from computed tomography (CT)
scans remains in great demand in the clinical environment because the
available methods fail to provide a generic solution due to wide anatomical
variations of lungs and the existence of pathologies. Manual annotation, the
current gold standard, is time consuming and often subject to human bias. On
the other hand, current state-of-the-art fully automated lung segmentation
methods fail to make their way into clinical practice due to their inability
to efficiently incorporate human input for handling misclassifications and praxis.
This paper presents a lung annotation tool for CT images that is interactive,
efficient, and robust. The proposed annotation tool produces an "as accurate as
possible" initial annotation based on the fuzzy-connectedness image
segmentation, followed by efficient manual fixation of the initial extraction
if deemed necessary by the practitioner. To provide maximum flexibility to the
users, our annotation tool is supported on three major operating systems
(Windows, Linux, and Mac OS X). The quantitative results comparing our free
software with commercially available lung segmentation tools show a higher
degree of consistency and precision of our software, with a considerable
potential to enhance the performance of routine clinical tasks.
Comment: 4 pages, 6 figures; to appear in the proceedings of the 36th Annual
International Conference of the IEEE Engineering in Medicine and Biology
Society (EMBC 2014).
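Fuzzy-connectedness segmentation, which the tool above uses for its initial annotation, grows a region from a seed by assigning every pixel the strength of its best path back to the seed, where a path is only as strong as its weakest link (minimum affinity along the path). A minimal sketch on a toy 2-D grid, computed with a Dijkstra-like max-min propagation; the simple intensity-difference affinity and the data are illustrative assumptions, not CIDI-Lung-Seg's implementation:

```python
import heapq

def fuzzy_connectedness(image, seed):
    """Max-min connectedness map from a seed pixel on a 2-D intensity grid."""
    rows, cols = len(image), len(image[0])

    def affinity(a, b):
        # Affinity between 4-neighbours falls with intensity difference
        # (a stand-in for the affinities used in real FC segmentation).
        return 1.0 - abs(image[a[0]][a[1]] - image[b[0]][b[1]]) / 255.0

    conn = {seed: 1.0}
    heap = [(-1.0, seed)]          # max-heap via negated strengths
    while heap:
        strength, (r, c) = heapq.heappop(heap)
        strength = -strength
        if strength < conn.get((r, c), 0.0):
            continue               # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # Path strength = weakest link along the path.
                s = min(strength, affinity((r, c), (nr, nc)))
                if s > conn.get((nr, nc), 0.0):
                    conn[(nr, nc)] = s
                    heapq.heappush(heap, (-s, (nr, nc)))
    return conn

img = [
    [100, 100, 200],
    [100, 110, 200],
    [ 90, 100, 210],
]
conn = fuzzy_connectedness(img, (0, 0))
# Pixels in the left (similar-intensity) region score near 1.0, while the
# bright right column is reachable only through a weak link.
```

Thresholding the resulting connectedness map yields the initial segmentation, which the practitioner can then fix up manually as the abstract describes.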
Visual Landmark Recognition from Internet Photo Collections: A Large-Scale Evaluation
The task of a visual landmark recognition system is to identify photographed
buildings or objects in query photos and to provide the user with relevant
information on them. With their increasing coverage of the world's landmark
buildings and objects, Internet photo collections are now being used as a
source for building such systems in a fully automatic fashion. This process
typically consists of three steps: clustering large amounts of images by the
objects they depict; determining object names from user-provided tags; and
building a robust, compact, and efficient recognition index. To date,
however, there is little empirical information on how well current approaches
for those steps perform in a large-scale open-set mining and recognition task.
Furthermore, there is little empirical information on how recognition
performance varies for different types of landmark objects and where there is
still potential for improvement. With this paper, we intend to fill these gaps.
Using a dataset of 500k images from Paris, we analyze each component of the
landmark recognition pipeline in order to answer the following questions: How
many and what kinds of objects can be discovered automatically? How can we best
use the resulting image clusters to recognize the object in a query? How can
the object be efficiently represented in memory for recognition? How reliably
can semantic information be extracted? And finally: What are the limiting
factors in the resulting pipeline from query to semantics? We evaluate how
different choices of methods and parameters for the individual pipeline steps
affect overall system performance and examine their effects for different query
categories such as buildings, paintings, or sculptures.
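The three pipeline steps described above (clustering images by the objects they depict, naming clusters from user-provided tags, and building a recognition index) can be sketched with toy data. The visual "words", tags, and naive single-link clustering here are illustrative assumptions standing in for local-feature matching over real photo collections, not the paper's methods:

```python
from collections import Counter, defaultdict

# Toy photo collection: each photo has visual "words" and user tags
# (hypothetical data standing in for local-feature clusters and user tags).
photos = [
    {"words": {"w1", "w2", "w3"}, "tags": ["eiffel", "paris"]},
    {"words": {"w1", "w2", "w4"}, "tags": ["eiffel", "tower"]},
    {"words": {"w7", "w8"},       "tags": ["louvre", "paris"]},
]

# Step 1: cluster photos whose visual words overlap (naive single-link).
def cluster(collection, min_shared=2):
    clusters = []
    for i, p in enumerate(collection):
        for c in clusters:
            if any(len(p["words"] & collection[j]["words"]) >= min_shared
                   for j in c):
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Step 2: name each cluster by its most frequent user tag.
def name(cluster_ids):
    tags = Counter(t for i in cluster_ids for t in photos[i]["tags"])
    return tags.most_common(1)[0][0]

# Step 3: inverted index from visual word -> cluster ids for recognition.
clusters = cluster(photos)
index = defaultdict(set)
for ci, ids in enumerate(clusters):
    for i in ids:
        for w in photos[i]["words"]:
            index[w].add(ci)

def recognize(query_words):
    """Vote over the inverted index, return the best cluster's name."""
    votes = Counter(ci for w in query_words for ci in index.get(w, ()))
    best = votes.most_common(1)[0][0]
    return name(clusters[best])

print(recognize({"w1", "w2"}))  # → eiffel
```

Each step is, of course, far harder at the paper's scale of 500k images; the sketch only mirrors the pipeline's shape from query to semantics.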
Hybrid image representation methods for automatic image annotation: a survey
In most automatic image annotation systems, images are represented with
low-level features using either global methods or local methods. In global
methods, the entire image is used as a unit. Local methods divide images into
blocks, adopting fixed-size sub-image blocks as sub-units, or into regions,
using segmented regions as sub-units. In contrast to typical automatic image
annotation methods that use either global or local features exclusively,
several recent methods incorporate both kinds of information, on the premise
that combining the two levels of features is beneficial for annotating images.
In this paper, we provide a survey of automatic image annotation techniques
from the perspective of feature extraction, and, in order to complement
existing surveys in the literature, we focus on the emerging hybrid methods
that combine both global and local features for image representation.
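One simple way to realise a hybrid representation of the kind surveyed is to concatenate a global descriptor with per-block local descriptors. A minimal sketch using plain intensity histograms; the histogram descriptor and the 2x2 block grid are illustrative assumptions, not any particular surveyed method:

```python
def histogram(pixels, bins=4):
    """L1-normalised intensity histogram (values assumed in 0..255)."""
    h = [0.0] * bins
    for p in pixels:
        h[min(p * bins // 256, bins - 1)] += 1
    total = sum(h) or 1.0
    return [v / total for v in h]

def hybrid_feature(image, grid=2):
    """Concatenate one global histogram with grid x grid block histograms."""
    rows, cols = len(image), len(image[0])
    feat = histogram([p for row in image for p in row])      # global part
    bh, bw = rows // grid, cols // grid
    for gr in range(grid):
        for gc in range(grid):
            block = [image[r][c]
                     for r in range(gr * bh, (gr + 1) * bh)
                     for c in range(gc * bw, (gc + 1) * bw)]
            feat += histogram(block)                          # local parts
    return feat

img = [[0, 64, 128, 255]] * 4
f = hybrid_feature(img)
# length = 4 global bins + 4 blocks * 4 bins = 20
```

The global part captures overall appearance while the block histograms retain coarse spatial layout; region-based hybrids replace the fixed grid with segmented regions, as the survey's taxonomy distinguishes.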