
    MapSnapper: Engineering an Efficient Algorithm for Matching Images of Maps from Mobile Phones

    The MapSnapper project aimed to develop a system for robust matching of low-quality images of a paper map, taken with a mobile phone, against a high-quality digital raster representation of the same map. The paper presents a novel methodology for performing content-based image retrieval and object recognition from query images that have been degraded by noise and subjected to transformations through the imaging system. In addition, the paper provides an insight into the evaluation-driven development process that was used to incrementally improve the matching performance until the design specifications were met.

    Automatic Palaeographic Exploration of Genizah Manuscripts

    The Cairo Genizah is a collection of hand-written documents containing approximately 350,000 fragments of mainly Jewish texts, discovered in the late 19th century. The fragments are today spread out across some 75 libraries and private collections worldwide, but there is an ongoing effort to document and catalogue all extant fragments. Palaeographic information plays a key role in the study of the Genizah collection. Script style, and more specifically handwriting, can be used to identify fragments that might originate from the same original work. Such matched fragments, commonly referred to as “joins”, are currently identified manually by experts, and presumably only a small fraction of existing joins have been discovered to date. In this work, we show that automatic handwriting matching functions, obtained from non-specific features using a corpus of writing samples, can perform this task quite reliably. In addition, we explore the problem of grouping various Genizah documents by script style without being provided any prior information about the relevant styles. The automatically obtained grouping agrees, for the most part, with the palaeographic taxonomy. In cases where the method fails, it is due to apparent similarities between related scripts.
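
    As a rough illustration only (the paper does not describe its implementation here, and the descriptor dimensionality and cosine-similarity threshold below are placeholder assumptions), candidate joins could be proposed by comparing per-fragment handwriting feature vectors and keeping the most similar pairs:

```python
# Hedged sketch: propose candidate "joins" by handwriting similarity.
# The 64-dimensional feature vectors and the 0.9 threshold are illustrative
# assumptions, not the paper's actual features or matching function.
import numpy as np

def candidate_joins(features, threshold=0.9):
    """features: (n_fragments, dim) handwriting descriptors, one per fragment."""
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = normed @ normed.T                      # pairwise cosine similarity
    pairs = []
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            if sims[i, j] >= threshold:
                pairs.append((i, j, float(sims[i, j])))
    return sorted(pairs, key=lambda p: -p[2])     # most similar pairs first

# Example with random stand-in descriptors for 100 fragments.
print(candidate_joins(np.random.rand(100, 64))[:5])
```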

    Scalable Object Recognition Using Hierarchical Quantization with a Vocabulary Tree

    An image retrieval technique employing a novel hierarchical feature/descriptor vector quantizer, the ‘vocabulary tree’, comprising hierarchically organized sets of feature vectors. The tree partitions the feature space hierarchically, creating a quantized space that is mapped to an integer encoding. The computerized implementation of the new technique employs subroutine components such as a trainer component, which generates a hierarchical quantizer, Q, for use in the novel image-insertion and image-query stages. The hierarchical quantizer Q is generated by running k-means on the feature (descriptor) space recursively, splitting each node of each resulting quantization level into a further set of child nodes. Preferably, training of the hierarchical quantizer Q is performed offline.
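
    As a small illustration (not the patented implementation; the branch factor, tree depth and descriptor dimensionality below are placeholder values), the offline training step amounts to recursive k-means over the descriptor space, and quantization to a walk down the resulting tree:

```python
# Minimal sketch of hierarchical k-means ("vocabulary tree") training and
# quantization, assuming scikit-learn and NumPy. Branch factor and depth are
# illustrative, not values from the source.
import numpy as np
from sklearn.cluster import KMeans

def build_vocab_tree(descriptors, branch_factor=10, depth=3):
    """Recursively split the descriptor space with k-means."""
    if depth == 0 or len(descriptors) < branch_factor:
        return None  # leaf node
    km = KMeans(n_clusters=branch_factor, n_init=4).fit(descriptors)
    children = []
    for c in range(branch_factor):
        subset = descriptors[km.labels_ == c]
        children.append(build_vocab_tree(subset, branch_factor, depth - 1))
    return {"kmeans": km, "children": children}

def quantize(tree, descriptor, path=()):
    """Map one descriptor to an integer-encoded path down the tree."""
    if tree is None:
        return path
    c = int(tree["kmeans"].predict(descriptor.reshape(1, -1))[0])
    return quantize(tree["children"][c], descriptor, path + (c,))

# Example: train offline on random stand-in descriptors, then quantize a query.
train = np.random.rand(5000, 128).astype(np.float32)   # SIFT-like vectors
tree = build_vocab_tree(train)
print(quantize(tree, np.random.rand(128).astype(np.float32)))
```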

    Feature extraction for range image interpretation using local topology statistics

    This thesis presents an approach for interpreting range images of known subject matter, such as the human face, based on the extraction and matching of local features from the images. In recent years, approaches to interpret two-dimensional (2D) images based on local feature extraction have advanced greatly; for example, systems such as the Scale Invariant Feature Transform (SIFT) can detect and describe the local features in 2D images effectively. With the aid of rapidly advancing three-dimensional (3D) imaging technology, in particular the advent of commercially available surface scanning systems based on photogrammetry, image representation has been able to extend into the third dimension. Moreover, range images confer a number of advantages over conventional 2D images, for instance the properties of being invariant to lighting, pose and viewpoint changes. As a result, an attempt has been made in this work to establish how best to represent the local range surface with a feature descriptor, thereby developing a matching system that takes advantage of the third dimension present in the range images and casting this in the framework of an existing scale- and rotation-invariant recognition technology: SIFT. By exploring statistical representations of the local variation, it is possible to represent and match range images of human faces. This can be achieved by extracting unique mathematical keys, known as feature descriptors, from the various automatically generated stable keypoint locations of the range images, thereby capturing the local information of the distributions of the mixes of surface types and their orientations simultaneously. Keypoints are generated through a scale-space approach, where the (x,y) location and the appropriate scale (sigma) are detected. In order to achieve invariance to in-plane viewpoint rotational changes, a consistent canonical orientation is assigned to each keypoint and the sampling patch is rotated to this canonical orientation. The mixes of surface types, derived using the shape index, and the image gradient orientations are extracted from each sampling patch by placing nine overlapping Gaussian sub-regions over the measurement aperture. Each of the nine regions is overlapped by one standard deviation in order to minimise the occurrence of spatial aliasing during the sampling stages and to provide better continuity within the descriptor. Moreover, surface normals can be computed at each keypoint location, allowing the local 3D pose to be estimated and corrected within the feature descriptors, since the orientations in which the images were captured are unknown a priori. As a result, the formulated feature descriptors have strong discriminative power and are stable to rotational changes.
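
    For illustration, a common formulation of the shape index for a range image z(x, y), computed from mean and Gaussian curvature, is sketched below; this is a standard derivation and my own assumption about the details, not code from the thesis:

```python
# Hedged sketch: shape index of a range image z(x, y) from mean (H) and
# Gaussian (K) curvature of the surface (x, y, z(x, y)). NumPy only.
import numpy as np

def shape_index(z):
    zy, zx = np.gradient(z)              # first derivatives of the depth map
    zxy, zxx = np.gradient(zx)           # second derivatives
    zyy, _ = np.gradient(zy)
    # First and second fundamental form terms of the Monge patch.
    E, F, G = 1 + zx**2, zx * zy, 1 + zy**2
    w = np.sqrt(1 + zx**2 + zy**2)
    L, M, N = zxx / w, zxy / w, zyy / w
    K = (L * N - M**2) / (E * G - F**2)                      # Gaussian curvature
    H = (E * N - 2 * F * M + G * L) / (2 * (E * G - F**2))   # mean curvature
    disc = np.sqrt(np.maximum(H**2 - K, 1e-12))              # guard round-off
    return (2.0 / np.pi) * np.arctan2(H, disc)               # values in [-1, 1]

# Example on a smooth synthetic surface.
yy, xx = np.mgrid[0:64, 0:64]
si = shape_index(np.sin(xx / 10.0) * np.cos(yy / 10.0))
print(si.min(), si.max())
```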

    Pattern Discovery for Object Categorization

    This paper presents a new approach to the object categorization problem. Our model is based on the successful ‘bag of words’ approach. However, unlike the original model, image features (keypoints) are not treated as independent and orderless. Instead, our model attempts to discover intermediate representations for each object class. This approach works by partitioning the image into smaller regions and then computing the spatial relationships between all of the informative image keypoints in each region. The results show that the inclusion of spatial relationships leads to a measurable increase in performance on two of the most challenging datasets.
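
    One possible reading of this idea, sketched under my own assumptions about the representation (a simple grid partition and a histogram over quantized pairwise angles between co-occurring visual words), is:

```python
# Hedged sketch: augment a bag-of-words model with pairwise spatial relations
# between keypoints inside each image region. Grid size, angle bins, and the
# (word_a, word_b, angle) histogram key are illustrative choices.
import numpy as np
from itertools import combinations

def spatial_relation_histogram(keypoints, words, image_size, grid=4, angle_bins=8):
    """keypoints: (N, 2) xy positions; words: (N,) visual-word ids; image_size: (h, w)."""
    h, w = image_size
    cell = (np.floor(keypoints[:, 0] * grid / w).astype(int) * grid
            + np.floor(keypoints[:, 1] * grid / h).astype(int))
    hist = {}
    for region in np.unique(cell):
        idx = np.where(cell == region)[0]
        for i, j in combinations(idx, 2):
            dx, dy = keypoints[j] - keypoints[i]
            a = int((np.arctan2(dy, dx) + np.pi) / (2 * np.pi) * angle_bins) % angle_bins
            key = (min(words[i], words[j]), max(words[i], words[j]), a)
            hist[key] = hist.get(key, 0) + 1
    return hist

# Example with random stand-in keypoints and word assignments.
kps = np.random.rand(50, 2) * [640, 480]
wds = np.random.randint(0, 100, size=50)
print(len(spatial_relation_histogram(kps, wds, image_size=(480, 640))))
```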

    TRECVid 2011 Experiments at Dublin City University

    This year the iAd-DCU team participated in three of the assigned TRECVid 2011 tasks: Semantic Indexing (SIN), Interactive Known-Item Search (KIS) and Multimedia Event Detection (MED). For the SIN task we submitted three full runs using global features, local features, and a fusion of global features, local features and relationships between concepts, respectively. The evaluation results show that local features achieve better performance, with only marginal gains when global features and relationships between concepts are introduced. With regard to our KIS submission, similar to our 2010 KIS experiments, we implemented an iPad interface to a KIS video search tool. The aim of this year’s experimentation was to evaluate different display methodologies for KIS interaction. For this work, we integrate a clustering element for keyframes, which operates over MPEG-7 features using k-means clustering. In addition, we employ concept detection, not simply for search, but as a means of choosing the most representative keyframes for ranked items. For our experiments we compare the baseline non-clustering system to a clustering system on a topic-by-topic basis. Finally, for the first time this year the iAd group at DCU was involved in the MED task. Two techniques are compared: employing low-level features directly and using concepts as intermediate representations. Evaluation results are promising when performing event detection using concepts as intermediate representations.
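
    As a rough sketch of the keyframe-clustering element described above (the feature vectors below are random stand-ins for the MPEG-7 descriptors, and the cluster count is arbitrary), representative keyframes can be chosen as the ones nearest each k-means centroid:

```python
# Hedged sketch: cluster keyframe feature vectors with k-means and keep the
# keyframe nearest each cluster centre as its representative. Assumes
# scikit-learn; the descriptors here are random placeholders.
import numpy as np
from sklearn.cluster import KMeans

def representative_keyframes(features, n_clusters=5):
    """features: (n_keyframes, dim) array; returns indices of representatives."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(features)
    reps = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        reps.append(int(members[np.argmin(dists)]))
    return reps

# Example with random stand-in descriptors for 200 keyframes.
print(representative_keyframes(np.random.rand(200, 64)))
```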