Designing learning object repositories : a thesis presented in partial fulfilment of the requirements for the degree of Master of Information Science in Information Sciences at Massey University
Learning object repositories are rapidly expanding into the role of independent educational systems that not only supplement traditional ways of learning but also allow users to search for, exchange and re-use learning objects. The intention of this technology is for such repositories to hold a database of learning objects catalogued by a learning content management system. However, for users to search efficiently, these learning objects need to be described using metadata standards or specifications. Metadata standards are commonly used to describe the learning objects stored within a repository so that its users can find exactly the resources they require; they are therefore an essential element of any learning object repository. In this thesis, a courseware example is used to demonstrate how to define the set of characteristics we want to describe for our courseware, and to map the data schema in the database onto the available metadata standards. The outcome is a set of metadata elements that fully describes the learning objects stored within the learning object repository; these elements will also assist instructors in creating adaptable courseware that can be reused by other instructors. Metadata standards are a critical element in the management of learning objects: they not only increase the accuracy of search results but also provide searchers with more relevant and descriptive information about the learning objects.
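To make the schema-to-standard mapping concrete, the sketch below maps a hypothetical courseware database record onto Dublin Core-style elements; the field names and the mapping itself are illustrative assumptions, not the schema defined in the thesis.

```python
# Illustrative sketch only: the database fields and the Dublin Core
# mapping below are hypothetical, not the schema used in the thesis.
COURSEWARE_TO_DC = {
    "title":       "dc:title",
    "author":      "dc:creator",
    "topic":       "dc:subject",
    "summary":     "dc:description",
    "file_format": "dc:format",
    "language":    "dc:language",
}

def to_metadata(record: dict) -> dict:
    """Map a courseware database row onto Dublin Core elements."""
    return {dc: record[field]
            for field, dc in COURSEWARE_TO_DC.items() if field in record}

record = {"title": "Intro to Databases", "author": "J. Smith",
          "topic": "information systems", "language": "en"}
print(to_metadata(record))
```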
Anveshak - A Groundtruth Generation Tool for Foreground Regions of Document Images
In this paper, we propose a graphical user interface based groundtruth generation tool. Annotation of an input document image is done based on its foreground pixels: foreground pixels are grouped together through user interaction to form labeling units, and these units are then labeled by the user with user-defined labels. The output produced by the tool is an image together with an XML file containing its metadata. This annotated data can then be used in various document image analysis applications.

Comment: Accepted in DAR 201
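As a rough illustration of the pipeline described, a minimal sketch might look like the following; the thresholding, the use of connected components as labeling units, and the XML layout are all assumptions (Anveshak itself groups pixels interactively and its actual format may differ).

```python
# Minimal sketch of the described pipeline: group foreground pixels into
# labeling units and emit XML metadata. The threshold, grouping strategy,
# and XML layout are assumptions, not Anveshak's actual behaviour.
import numpy as np
from scipy import ndimage
import xml.etree.ElementTree as ET

def annotate(image: np.ndarray, label_for_unit) -> ET.ElementTree:
    fg = image < 128                      # assume dark ink on light paper
    units, n = ndimage.label(fg)          # connected components as labeling units
    root = ET.Element("document")
    for i, (rows, cols) in enumerate(ndimage.find_objects(units), start=1):
        unit = ET.SubElement(root, "unit", id=str(i),
                             bbox=f"{cols.start},{rows.start},{cols.stop},{rows.stop}")
        unit.set("label", label_for_unit(i))  # in the tool, the user assigns this
    return ET.ElementTree(root)

img = np.full((40, 40), 255, dtype=np.uint8)
img[5:15, 5:25] = 0                       # one synthetic foreground blob
ET.dump(annotate(img, lambda i: "text-block"))
```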
The Robust Reading Competition Annotation and Evaluation Platform
The ICDAR Robust Reading Competition (RRC), initiated in 2003 and re-established in 2011, has become a de facto evaluation standard for robust reading systems and algorithms. Concurrent with its second incarnation in 2011, a continuous effort began to develop an on-line framework to facilitate the hosting and management of competitions. This paper outlines the Robust Reading Competition Annotation and Evaluation Platform, the backbone of the competitions. The RRC Annotation and Evaluation Platform is a modular framework, fully accessible through on-line interfaces. It comprises a collection of tools and services for managing all processes involved in defining and evaluating a research task, from dataset definition to annotation management, evaluation specification and results analysis. Although the framework has been designed with robust reading research in mind, many of the provided tools are generic by design. All aspects of the RRC Annotation and Evaluation Framework are available for research use.

Comment: 6 pages, accepted to DAS 201
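The abstract does not expose the platform's internals, but the separation of concerns it names (dataset definition, annotation management, evaluation specification, results analysis) can be sketched as plain data structures; every name below is an illustrative assumption, not the RRC API.

```python
# Illustrative only: a plain-data sketch of the separation of concerns the
# platform describes. None of these names come from the actual RRC platform.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class ResearchTask:
    name: str
    dataset: List[str]                                # image identifiers
    annotations: Dict[str, str] = field(default_factory=dict)  # ground truth
    evaluate: Callable[[dict, dict], float] = None    # metric(ground truth, results)

def match_rate(gt: dict, results: dict) -> float:
    """Stand-in metric; a real robust-reading task would use IoU or edit distance."""
    hits = sum(results.get(k) == v for k, v in gt.items())
    return hits / max(len(gt), 1)

task = ResearchTask(name="text-localization-demo",
                    dataset=["img_001"],
                    annotations={"img_001": "word"},
                    evaluate=match_rate)
print(task.evaluate(task.annotations, {"img_001": "word"}))  # 1.0
```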
Multi modal multi-semantic image retrieval
The rapid growth in the volume of visual information, e.g. images and video, can overwhelm users' ability to find and access the specific visual information of interest to them. In recent years, ontology knowledge-based (KB) image retrieval techniques have been adopted in an attempt to extract knowledge from these images and enhance retrieval performance. A KB framework is presented to support semi-automatic annotation and semantic image retrieval using multimodal cues (visual features and text captions). In addition, a hierarchical structure for the KB allows metadata to be shared and supports multiple semantics (polysemy) for concepts. The framework builds an effective knowledge base pertaining to a domain-specific image collection, e.g. sports, and is able to disambiguate and assign high-level semantics to 'unannotated' images.
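A minimal sketch of the hierarchical-KB idea, with invented concept names: one surface term can resolve to several concept nodes (polysemy), and shared metadata lives on ancestor nodes so descendants inherit it.

```python
# Minimal sketch, under assumed names: a term like "bat" resolves to several
# concept nodes (polysemy), and metadata is shared down the hierarchy.
class Concept:
    def __init__(self, name, parent=None, metadata=None):
        self.name, self.parent = name, parent
        self.metadata = metadata or {}

    def inherited(self):
        """Merge metadata down the hierarchy (ancestors first)."""
        merged = self.parent.inherited() if self.parent else {}
        merged.update(self.metadata)
        return merged

sport   = Concept("sport", metadata={"domain": "sports"})
cricket = Concept("cricket", parent=sport)
animal  = Concept("animal", metadata={"domain": "nature"})

# Polysemy: the same term indexes more than one concept.
lexicon = {"bat": [Concept("cricket bat", parent=cricket),
                   Concept("bat (mammal)", parent=animal)]}
for sense in lexicon["bat"]:
    print(sense.name, sense.inherited())
```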
Local feature analysis of visual content, namely using Scale Invariant Feature Transform (SIFT) descriptors, has been deployed in the 'Bag of Visual Words' (BVW) model as an effective way to represent visual content and to enhance its classification and retrieval. Local features are more useful than global features, e.g. colour, shape or texture, as they are invariant to image scale, orientation and camera angle. An innovative approach is proposed for the representation, annotation and retrieval of visual content using a hybrid technique based on both an unstructured visual word model and a (structured) hierarchical ontology KB model. The structural model facilitates the disambiguation of unstructured visual words and, by exploiting local conceptual structures and their relationships, classifies visual content more effectively than a vector space model.
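A standard Bag-of-Visual-Words pipeline can be sketched as below; this is the generic technique (SIFT keypoints, k-means vocabulary, per-image word histograms), not the thesis's SLAC-based variant.

```python
# Generic Bag-of-Visual-Words sketch: SIFT descriptors -> k-means
# vocabulary -> one normalised word histogram per image.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def bovw_histograms(gray_images, k=50):
    """gray_images: list of uint8 grayscale arrays with visible structure."""
    sift = cv2.SIFT_create()
    descs = [sift.detectAndCompute(img, None)[1] for img in gray_images]
    descs = [d for d in descs if d is not None]   # skip images with no keypoints
    vocab = KMeans(n_clusters=k, n_init=10).fit(np.vstack(descs))
    hists = []
    for d in descs:
        words = vocab.predict(d)                  # assign each descriptor a word
        hists.append(np.bincount(words, minlength=k) / len(words))
    return np.array(hists)
```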
The key contributions of this framework in using local features for image representation are threefold. First, a method to generate visual words using the semantic local adaptive clustering (SLAC) algorithm, which takes the term weights and spatial locations of keypoints into account; consequently, semantic information is preserved. Second, a technique to detect domain-specific 'non-informative visual words', which are ineffective at representing the content of visual data and degrade its categorisation. Third, a method to combine an ontology model with a visual word model to resolve synonym (visual heterogeneity) and polysemy problems. The experimental results show that this approach can discover semantically meaningful descriptions of visual content and can efficiently recognise specific events, e.g. sports events, depicted in images.
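The SLAC details are not given in the abstract; as a stand-in for the second contribution, the sketch below flags non-informative visual words by document frequency, the same intuition as textual stop-word removal. The thresholds are assumptions, not values from the thesis.

```python
# Stand-in detector: flag visual words that occur in almost every image
# (or almost none) as non-informative. Thresholds are assumed.
import numpy as np

def non_informative_words(histograms: np.ndarray, low=0.01, high=0.9):
    """histograms: (n_images, vocab_size) BoVW counts."""
    df = (histograms > 0).mean(axis=0)   # fraction of images containing each word
    return np.where((df < low) | (df > high))[0]

hists = np.random.default_rng(0).integers(0, 2, size=(100, 50))
hists[:, 0] = 1   # word 0 appears in every image -> non-informative
hists[:, 1] = 0   # word 1 appears in no image   -> non-informative
print(non_informative_words(hists))      # includes words 0 and 1
```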
Since discovering the semantics of an image is an extremely challenging problem, one promising approach to enhancing visual content interpretation is to use any textual information that accompanies an image as a cue to predict its meaning, by transforming this textual information into a structured annotation, e.g. using XML, RDF, OWL or MPEG-7. Although text and images are distinct types of information representation and modality, there are strong, invariant, implicit connections between images and any accompanying text. Semantic analysis of image captions can be used by image retrieval systems to retrieve selected images more precisely. To do this, Natural Language Processing (NLP) is first exploited to extract concepts from image captions; an ontology-based knowledge model is then deployed to resolve natural language ambiguities. To deal with the accompanying textual information, two methods of extracting knowledge from it are proposed. First, metadata can be extracted automatically from text captions and restructured with respect to a semantic model. Second, the use of Latent Semantic Indexing (LSI) in relation to a domain-specific ontology-based knowledge model enables the combined framework to tolerate ambiguities and variations (incompleteness) in the metadata. The ontology-based knowledge model allows the system to find indirectly relevant concepts in image captions and thus leverage them to represent the semantics of images at a higher level.
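LSI itself is a standard technique; a minimal sketch over made-up example captions (standing in for the thesis's combined ontology framework) is:

```python
# Minimal LSI sketch over image captions: TF-IDF -> truncated SVD ->
# cosine similarity in the latent space. Captions are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

captions = ["batsman hits the ball at the cricket ground",
            "goalkeeper saves a penalty in the football match",
            "bowler delivers the ball to the batsman"]
tfidf = TfidfVectorizer().fit(captions)
lsi = TruncatedSVD(n_components=2, random_state=0).fit(tfidf.transform(captions))

def query(text):
    q = lsi.transform(tfidf.transform([text]))
    docs = lsi.transform(tfidf.transform(captions))
    return cosine_similarity(q, docs)[0]   # similarity to each caption

print(query("cricket batsman"))            # highest for the cricket captions
```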
Experimental results show that the proposed framework significantly enhances image retrieval and narrows the semantic gap between the lower-level machine-derived and higher-level human-understandable conceptualisations.