Shape-based image retrieval in iconic image databases.
by Chan Yuk Ming. Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. Includes bibliographical references (leaves 117-124). Abstract also in Chinese.
Chapter 1 --- Introduction
Chapter 1.1 --- Content-based Image Retrieval
Chapter 1.2 --- Designing a Shape-based Image Retrieval System
Chapter 1.3 --- Information on Trademark
Chapter 1.3.1 --- What is a Trademark?
Chapter 1.3.2 --- Search for Conflicting Trademarks
Chapter 1.3.3 --- Research Scope
Chapter 1.4 --- Information on Chinese Cursive Script Character
Chapter 1.5 --- Problem Definition
Chapter 1.6 --- Contributions
Chapter 1.7 --- Thesis Organization
Chapter 2 --- Literature Review
Chapter 2.1 --- Trademark Retrieval using QBIC Technology
Chapter 2.2 --- STAR
Chapter 2.3 --- ARTISAN
Chapter 2.4 --- Trademark Retrieval using a Visually Salient Feature
Chapter 2.5 --- Trademark Recognition using Closed Contours
Chapter 2.6 --- Trademark Retrieval using a Two Stage Hierarchy
Chapter 2.7 --- Logo Matching using Negative Shape Features
Chapter 2.8 --- Chapter Summary
Chapter 3 --- Background on Shape Representation and Matching
Chapter 3.1 --- Simple Geometric Features
Chapter 3.1.1 --- Circularity
Chapter 3.1.2 --- Rectangularity
Chapter 3.1.3 --- Hole Area Ratio
Chapter 3.1.4 --- Horizontal Gap Ratio
Chapter 3.1.5 --- Vertical Gap Ratio
Chapter 3.1.6 --- Central Moments
Chapter 3.1.7 --- Major Axis Orientation
Chapter 3.1.8 --- Eccentricity
Chapter 3.2 --- Fourier Descriptors
Chapter 3.3 --- Chain Codes
Chapter 3.4 --- Seven Invariant Moments
Chapter 3.5 --- Zernike Moments
Chapter 3.6 --- Edge Direction Histogram
Chapter 3.7 --- Curvature Scale Space Representation
Chapter 3.8 --- Chapter Summary
Chapter 4 --- Genetic Algorithm for Weight Assignment
Chapter 4.1 --- Genetic Algorithm (GA)
Chapter 4.1.1 --- Basic Idea
Chapter 4.1.2 --- Genetic Operators
Chapter 4.2 --- Why GA?
Chapter 4.3 --- Weight Assignment Problem
Chapter 4.3.1 --- Integration of Image Attributes
Chapter 4.4 --- Proposed Solution
Chapter 4.4.1 --- Formalization
Chapter 4.4.2 --- Proposed Genetic Algorithm
Chapter 4.5 --- Chapter Summary
Chapter 5 --- Shape-based Trademark Image Retrieval System
Chapter 5.1 --- Problems with Existing Methods
Chapter 5.1.1 --- Edge Direction Histogram
Chapter 5.1.2 --- Boundary Based Techniques
Chapter 5.2 --- Proposed Solution
Chapter 5.2.1 --- Image Preprocessing
Chapter 5.2.2 --- Automatic Feature Extraction
Chapter 5.2.3 --- Approximated Boundary
Chapter 5.2.4 --- Integration of Shape Features and Query Processing
Chapter 5.3 --- Experimental Results
Chapter 5.3.1 --- Experiment 1: Weight Assignment using Genetic Algorithm
Chapter 5.3.2 --- Experiment 2: Speed of Feature Extraction and Retrieval
Chapter 5.3.3 --- Experiment 3: Evaluation by Precision
Chapter 5.3.4 --- Experiment 4: Evaluation by Recall for Deformed Images
Chapter 5.3.5 --- Experiment 5: Evaluation by Recall for Hand Drawn Query Trademarks
Chapter 5.3.6 --- Experiment 6: Evaluation by Recall for Rotated, Scaled and Mirrored Images
Chapter 5.3.7 --- Experiment 7: Comparison of Different Integration Methods
Chapter 5.4 --- Chapter Summary
Chapter 6 --- Shape-based Chinese Cursive Script Character Image Retrieval System
Chapter 6.1 --- Comparison to the Trademark Retrieval Problem
Chapter 6.1.1 --- Feature Selection
Chapter 6.1.2 --- Speed of System
Chapter 6.1.3 --- Variation of Style
Chapter 6.2 --- Target of the Research
Chapter 6.3 --- Proposed Solution
Chapter 6.3.1 --- Image Preprocessing
Chapter 6.3.2 --- Automatic Feature Extraction
Chapter 6.3.3 --- Thinned Image and Linearly Normalized Image
Chapter 6.3.4 --- Edge Directions
Chapter 6.3.5 --- Integration of Shape Features
Chapter 6.4 --- Experimental Results
Chapter 6.4.1 --- Experiment 8: Weight Assignment using Genetic Algorithm
Chapter 6.4.2 --- Experiment 9: Speed of Feature Extraction and Retrieval
Chapter 6.4.3 --- Experiment 10: Evaluation by Recall for Deformed Images
Chapter 6.4.4 --- Experiment 11: Evaluation by Recall for Rotated and Scaled Images
Chapter 6.4.5 --- Experiment 12: Comparison of Different Integration Methods
Chapter 6.5 --- Chapter Summary
Chapter 7 --- Conclusion
Chapter 7.1 --- Summary
Chapter 7.2 --- Future Research
Chapter 7.2.1 --- Limitations
Chapter 7.2.2 --- Future Directions
Chapter A --- A Representative Subset of Trademark Images
Chapter B --- A Representative Subset of Cursive Script Character Images
Chapter C --- Shape Feature Extraction Toolbox for Matlab V5.3
Chapter C.1 --- central_moment
Chapter C.2 --- centroid
Chapter C.3 --- cir
Chapter C.4 --- css
Chapter C.5 --- css_match
Chapter C.6 --- ecc
Chapter C.7 --- edge_directions
Chapter C.8 --- fourier_d
Chapter C.9 --- gen_shape
Chapter C.10 --- hu7
Chapter C.11 --- isclockwise
Chapter C.12 --- moment
Chapter C.13 --- normalized_moment
Chapter C.14 --- orientation
Chapter C.15 --- resample_pts
Chapter C.16 --- rectangularity
Chapter C.17 --- trace_points
Chapter C.18 --- warp_conv
Bibliography
Trademark image retrieval by local features
The challenge of abstract trademark image retrieval as a test of machine vision algorithms has attracted considerable research interest in the past decade. Current
operational trademark retrieval systems involve manual annotation of the images
(the current "gold standard"). Accordingly, such systems require a substantial
amount of time and labour to access, and are therefore expensive to operate. This
thesis focuses on the development of algorithms that mimic aspects of human
visual perception in order to retrieve similar abstract trademark images
automatically. A significant category of trademark images is typically highly
stylised, comprising a collection of distinctive graphical elements that often
include geometric shapes. Therefore, in order to compare the similarity of such
images, the principal aim of this research has been to develop a method for solving
the partial matching and shape perception problem.
There are few useful techniques for partial shape matching in the context of
trademark retrieval, because those existing techniques tend not to support multicomponent
retrieval. When this work was initiated most trademark image
retrieval systems represented images by means of global features, which are not
suited to solving the partial matching problem. Instead, the author has
investigated the use of local image features as a means to finding similarities
between trademark images that only partially match in terms of their subcomponents.
During the course of this work, it has been established that the
Harris and Chabat detectors could potentially perform sufficiently well to serve as
the basis for local feature extraction in trademark image retrieval. Early findings
in this investigation indicated that the well established SIFT (Scale Invariant
Feature Transform) local features, based on the Harris detector, could potentially
serve as an adequate underlying local representation for matching trademark
images.
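The kind of local feature extraction discussed above can be illustrated with a toy Harris corner response map. The sketch below is a generic stand-in written for illustration only (the function name, the 3x3 box window and the constant k = 0.04 are assumptions), not the pipeline used in the thesis:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response map (box window used instead of a Gaussian)."""
    Iy, Ix = np.gradient(img.astype(float))       # image gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # 3x3 mean filter as the local summation window
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy                   # det of structure tensor
    trace = Sxx + Syy
    return det - k * trace * trace                # Harris corner measure

# a white square on black: the response is strongest at its four corners
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
```

A corner pixel of the square yields a positive response, a straight edge yields a negative one, and flat regions yield zero, which is exactly the selectivity that makes such detectors candidates for anchoring local descriptors.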
There are few researchers who have used mechanisms based on human
perception for trademark image retrieval, implying that the shape representations
utilised in the past to solve this problem do not necessarily reflect the shapes
contained in these images, as characterised by human perception. In response, a
practical approach to trademark image retrieval by perceptual grouping has been
developed based on defining meta-features that are calculated from the spatial
configurations of SIFT local image features. This new technique measures certain
visual properties of the appearance of images containing multiple graphical
elements and supports perceptual grouping by exploiting the non-accidental
properties of their configuration.
Our validation experiments indicated that we were indeed able to capture
and quantify the differences in the global arrangement of sub-components evident
when comparing stylised images in terms of their visual appearance properties.
Such visual appearance properties, measured using 17 of the proposed meta-features,
include relative sub-component proximity, similarity, rotation and
symmetry. Similar work on meta-features, based on the above Gestalt proximity,
similarity, and simplicity groupings of local features, had not been reported in the
current computer vision literature at the time of undertaking this work.
We adopted relevance feedback to allow the visual appearance
properties of relevant and non-relevant images returned in response to a query to
be determined by example. Since limited training data is available when
constructing a relevance classifier by means of user supplied relevance feedback,
the intrinsically non-parametric machine learning algorithm ID3 (Iterative
Dichotomiser 3) was selected to construct decision trees by means of dynamic
rule induction. We believe that the above approach to capturing high-level visual
concepts, encoded by means of meta-features specified by example through
relevance feedback and decision tree classification, supports flexible trademark
image retrieval and is wholly novel.
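Since ID3 builds decision trees by greedy, entropy-based rule induction, the idea can be sketched on hypothetical binary meta-features. The feature names ("symmetry", "proximity") and labels below are invented for illustration; this is not the system's actual classifier:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def id3(rows, labels, features):
    """Greedy entropy-based tree induction (ID3) over discrete features."""
    if len(set(labels)) == 1:            # pure node -> leaf
        return labels[0]
    if not features:                     # no features left -> majority leaf
        return Counter(labels).most_common(1)[0][0]

    def gain(f):                         # information gain of splitting on f
        rem = 0.0
        for v in set(r[f] for r in rows):
            sub = [l for r, l in zip(rows, labels) if r[f] == v]
            rem += len(sub) / len(labels) * entropy(sub)
        return entropy(labels) - rem

    best = max(features, key=gain)
    node = {"feature": best, "children": {}}
    for v in set(r[best] for r in rows):
        sub_rows = [r for r in rows if r[best] == v]
        sub_labels = [l for r, l in zip(rows, labels) if r[best] == v]
        node["children"][v] = id3(sub_rows, sub_labels,
                                  [f for f in features if f != best])
    return node

def classify(node, row):
    while isinstance(node, dict):
        node = node["children"][row[node["feature"]]]
    return node

# hypothetical relevance feedback: binary meta-features per returned image
rows = [{"symmetry": 1, "proximity": 0},
        {"symmetry": 1, "proximity": 1},
        {"symmetry": 0, "proximity": 1},
        {"symmetry": 0, "proximity": 0}]
labels = ["relevant", "relevant", "non-relevant", "non-relevant"]
tree = id3(rows, labels, ["symmetry", "proximity"])
```

With so few labelled examples per query, a non-parametric learner of this kind can still induce a usable rule (here it discovers that "symmetry" alone separates the classes), which is the motivation given above for choosing ID3 over parametric classifiers.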
The retrieval performance of the above system was compared with that of two other
state-of-the-art trademark image retrieval systems: Artisan developed by Eakins
(Eakins et al., 1998) and a system developed by Jiang (Jiang et al., 2006). Using
relevance feedback, our system achieves higher average normalised precision
than either of the systems developed by Eakins or Jiang. However, while our
trademark image query and database set is based on an image dataset used by
Eakins, we employed different numbers of images. It was not possible to access
the same query set and image database used in the evaluation of Jiang's trademark
image retrieval system. Despite these differences in evaluation
methodology, our approach would appear to have the potential to improve
retrieval effectiveness.
Identifying cell types with single cell sequencing data
Single-cell RNA sequencing (scRNA-seq) techniques, which examine the genetic information of individual cells, provide unparalleled resolution for discerning cellular heterogeneity. By contrast, traditional bulk RNA sequencing technologies measure the average RNA expression level across a large number of input cells, which is insufficient for studying heterogeneous systems. Hence, scRNA-seq technologies make it possible to tackle many previously inaccessible problems, such as rare cell type identification, cancer evolution and cell lineage relationship inference.
Cell population identification is fundamental to the analysis of scRNA-seq data. Generally, the workflow of scRNA-seq analysis includes data processing, dropout imputation, feature selection, dimensionality reduction, similarity matrix construction and unsupervised clustering. Many single-cell clustering algorithms rely on similarity matrices of cells, but many existing studies have not achieved the expected results. There are some unique challenges in analyzing scRNA-seq data sets, including significant levels of biological and technical noise, so similarity matrix construction still deserves further study.
In my study, I present a new method, named Learning Sparse Similarity Matrices (LSSM), to construct cell-cell similarity matrices, after which several clustering methods are used to identify cell populations from scRNA-seq data. Firstly, based on sparse subspace theory, the relationship between a cell and the other cells of the same cell type is expressed as a linear combination. Secondly, I construct a convex optimization objective function to find the similarity matrix, which consists of the corresponding coefficients of the linear combinations mentioned above. Thirdly, I design an algorithm combining column-wise learning with a greedy strategy to solve the objective function. As a result, the large optimization problem over the similarity matrix can be decomposed into a series of smaller optimization problems, one per column of the similarity matrix, and the sparsity of the whole matrix is ensured by the sparsity of each column. Fourthly, in order to pick an optimal clustering method for identifying cell populations based on the similarity matrix produced by LSSM, I apply several clustering methods to the similarity matrices calculated by LSSM from eight scRNA-seq data sets. The clustering results show that my method performs best when combined with spectral clustering (Laplacian eigenmaps + k-means clustering). In addition, compared with five state-of-the-art methods, my method outperforms most competing methods on the eight data sets. Finally, I combine LSSM with t-Distributed Stochastic Neighbor Embedding (t-SNE) to visualize the data points of scRNA-seq data in two-dimensional space. The results show that most data points from the same cell type lie close together, while those from different cell clusters are well separated.
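The spectral clustering step named above (Laplacian eigenmaps followed by k-means) can be sketched as follows. The LSSM optimization itself is not reproduced; the function below simply assumes a precomputed similarity matrix S and uses a deterministic farthest-point initialization for k-means, both of which are illustrative choices rather than the study's implementation:

```python
import numpy as np

def spectral_clustering(S, k):
    """Laplacian eigenmap of a similarity matrix followed by plain k-means."""
    W = 0.5 * (S + S.T)                      # symmetrize the similarity matrix
    d = W.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(W)) - Dinv @ W @ Dinv     # normalized graph Laplacian
    _, vecs = np.linalg.eigh(L)
    Y = vecs[:, :k]                          # k smallest eigenvectors = embedding
    Y = Y / np.maximum(np.linalg.norm(Y, axis=1, keepdims=True), 1e-12)
    # deterministic farthest-point initialization, then Lloyd iterations
    centers = [Y[0]]
    for _ in range(1, k):
        d2 = np.min(((Y[:, None] - np.array(centers)[None]) ** 2).sum(-1), axis=1)
        centers.append(Y[np.argmax(d2)])
    centers = np.array(centers)
    for _ in range(100):
        lab = np.argmin(((Y[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([Y[lab == j].mean(0) if np.any(lab == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return lab

# toy similarity matrix with two obvious cell groups
S = np.zeros((6, 6))
S[:3, :3] = 1.0
S[3:, 3:] = 1.0
labels = spectral_clustering(S, 2)
```

On a block-structured similarity matrix like this toy S, the two smallest Laplacian eigenvectors separate the blocks, so k-means in the embedded space recovers the two cell groups exactly.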
The Optimisation of Elementary and Integrative Content-Based Image Retrieval Techniques
Image retrieval plays a major role in many image processing applications. However, a number of factors (e.g. rotation, non-uniform illumination, noise and lack of spatial information) can disrupt the outputs of image retrieval systems such that they cannot produce the desired results. In recent years, many researchers have introduced different approaches to overcome this problem. Colour-based CBIR (content-based image retrieval) and shape-based CBIR were the most commonly used techniques for obtaining image signatures. Although the colour histogram and shape descriptor have produced satisfactory results for certain applications, they still suffer many theoretical and practical problems. A prominent one among them is the well-known "curse of dimensionality".
In this research, a new Fuzzy Fusion-based Colour and Shape Signature (FFCSS) approach for integrating colour-only and shape-only features has been investigated to produce an effective image feature vector for database retrieval. The proposed technique is based on an optimised fuzzy colour scheme and robust shape descriptors.
Experimental tests were carried out to check the behaviour of the FFCSS-based system, including the sensitivity and robustness of the proposed signature on the sampled images, especially under varied conditions of rotation, scaling, noise and light intensity. To further improve the retrieval efficiency of the devised signature model, the target image repositories were clustered into several groups using the k-means clustering algorithm at system runtime, with the search beginning at the centre of each cluster. The FFCSS-based approach has proven superior to other benchmarked classic CBIR methods; hence this research makes a substantial contribution on both the theoretical and practical fronts.
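The runtime clustering strategy described above amounts to a two-stage search: cluster the signature vectors with k-means offline, then compare a query only against the members of the nearest cluster. The sketch below is a simplified illustration with invented names and toy 2-D "signatures", not the FFCSS implementation:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(1, k):
        d2 = np.min(((X[:, None] - np.array(centers)[None]) ** 2).sum(-1), axis=1)
        centers.append(X[np.argmax(d2)])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

def retrieve(query, X, centers, labels, top=3):
    """Search only the cluster whose centre is nearest to the query signature."""
    c = np.argmin(((centers - query) ** 2).sum(-1))
    idx = np.where(labels == c)[0]
    order = np.argsort(((X[idx] - query) ** 2).sum(-1))
    return idx[order[:top]]

# toy 2-D signature vectors: two well-separated groups of images
X = np.array([[0, 0], [0.5, 0], [0, 0.5],
              [10, 10], [10.5, 10], [10, 10.5]], dtype=float)
centers, labels = kmeans(X, 2)
result = retrieve(np.array([0.2, 0.1]), X, centers, labels, top=2)
```

The design trade-off is the usual one: restricting the scan to one cluster cuts query cost roughly by the number of clusters, at the risk of missing relevant images that fall just across a cluster boundary.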
Holistic interpretation of visual data based on topology: semantic segmentation of architectural facades
The work presented in this dissertation is a step towards effectively incorporating contextual knowledge in the task of semantic segmentation. To date, the use of context has been confined to the genre of the scene, with a few exceptions in the field. Research has instead been directed towards enhancing appearance descriptors. While this is unarguably important, recent studies show that computer vision has reached a near-human level of performance in relying on these descriptors when objects have stable, distinctive surface properties and proper imaging conditions. When these conditions are not met, humans exploit their knowledge about the intrinsic geometric layout of the scene to make local decisions; computer vision lags behind when it comes to this asset. For this reason, we aim to bridge the gap by presenting algorithms for semantic segmentation of building facades that make use of scene topological aspects. We provide a classification scheme to carry out segmentation and recognition simultaneously. The algorithm solves a single optimization function and yields a semantic interpretation of facades, relying on the modeling power of probabilistic graphs and efficient discrete combinatorial optimization tools. We also tackle the same problem of semantic facade segmentation with a neural network approach, attaining accuracy figures that are on par with the state of the art in a fully automated pipeline. We start from pixelwise classifications obtained via Convolutional Neural Networks (CNNs), which are then structurally validated through a cascade of Restricted Boltzmann Machines (RBMs) and a Multi-Layer Perceptron (MLP) that regenerates the most likely layout. In the domain of architectural modeling, we also address geometric multi-model fitting, introducing a novel guided sampling algorithm based on Minimum Spanning Trees (MSTs) which surpasses other propagation techniques in terms of robustness to noise.
We make a number of additional contributions, such as a measure of model deviation which captures variations among fitted models.
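As a hedged illustration of MST-guided sampling for multi-model fitting, the sketch below builds a minimum spanning tree with Prim's algorithm and grows hypothesis sets along its edges, exploiting the fact that MST neighbours usually belong to the same model instance. The helper names and the neighbourhood-growing rule are assumptions for illustration, not the dissertation's algorithm:

```python
import numpy as np

def prim_mst(points):
    """Minimum spanning tree of the complete Euclidean graph (Prim's algorithm)."""
    n = len(points)
    D = ((points[:, None] - points[None]) ** 2).sum(-1)   # squared distances
    visited = np.zeros(n, dtype=bool)
    visited[0] = True
    dist = D[0].copy()
    parent = np.zeros(n, dtype=int)
    edges = []
    for _ in range(n - 1):
        j = int(np.argmin(np.where(visited, np.inf, dist)))  # cheapest frontier node
        visited[j] = True
        edges.append((int(parent[j]), j))
        closer = (D[j] < dist) & ~visited                    # relax distances via j
        parent[closer] = j
        dist = np.where(closer, D[j], dist)
    return edges

def mst_neighborhood(edges, seed, size):
    """Grow a hypothesis set along MST edges starting from a seed point."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    picked, frontier = [seed], [seed]
    while frontier and len(picked) < size:
        nxt = []
        for u in frontier:
            for v in adj.get(u, []):
                if v not in picked and v not in nxt:
                    nxt.append(v)
        picked.extend(nxt)
        frontier = nxt
    return picked[:size]

# two spatial clusters of points; MST edges mostly stay within a cluster
pts = np.array([[0, 0], [1, 0], [0, 1], [10, 10], [11, 10]], dtype=float)
edges = prim_mst(pts)
sample = mst_neighborhood(edges, seed=0, size=3)
```

Sampling minimal sets along tree edges, rather than uniformly at random, raises the probability that all sampled points come from one structure, which is the claimed source of robustness to noise.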
Final report for the endowment of simulator agents with human-like episodic memory LDRD.
This report documents work undertaken to endow the cognitive framework currently under development at Sandia National Laboratories with a human-like memory for specific life episodes. Capabilities have been demonstrated within the context of three separate problem areas. The first year of the project developed a capability whereby simulated robots were able to utilize a record of shared experience to perform surveillance of a building to detect a source of smoke. The second year focused on simulations of social interactions, providing a queriable record of interactions such that a time series of events could be constructed and reconstructed. The third year addressed tools to promote desktop productivity, creating a capability to query episodic logs in real time, allowing the model of a user to build on itself based on observations of the user's behavior.
Simple identification tools in FishBase
Simple identification tools for fish species were included in the FishBase information system from its inception. Early tools made use of the relational model and characters like fin ray meristics. Soon pictures and drawings were added as a further help, similar to a field guide. Later came the computerization of existing dichotomous keys, again in combination with pictures and other information, and the ability to restrict possible species by country, area, or taxonomic group. Today, www.FishBase.org offers four different ways to identify species. This paper describes these tools with their advantages and disadvantages, and suggests various options for further
development. It explores the possibility of a holistic and integrated computer-aided strategy.
Data Mining Using the Crossing Minimization Paradigm
Our ability and capacity to generate, record and store multi-dimensional, apparently
unstructured data is increasing rapidly, while the cost of data storage is going down. The data recorded is not perfect, as noise gets introduced in it from different sources. Some of the basic forms of noise are incorrect recording of values and missing values. The formal study of discovering useful hidden information in the data is called Data Mining.
Because of the size and complexity of the problem, practical data mining problems are
best attempted using automatic means.
Data Mining can be categorized into two types: supervised learning (classification) and unsupervised learning (clustering). Clustering only the records in a database (or data matrix) gives a global view of the data and is called one-way clustering. For a detailed analysis, or a local view, biclustering (also called co-clustering or two-way clustering) is required, involving the simultaneous clustering of the records and the attributes.
In this dissertation, a novel, fast and white-noise-tolerant data mining solution is
proposed based on the Crossing Minimization (CM) paradigm; the solution works for
one-way as well as two-way clustering for discovering overlapping biclusters. For
decades the CM paradigm has traditionally been used for graph drawing and VLSI
(Very Large Scale Integration) circuit design for reducing wire length and congestion. The utility of the proposed technique is demonstrated by comparing it with other biclustering techniques using simulated noisy, as well as real data from Agriculture, Biology and other domains.
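The barycenter heuristic, a classic crossing-minimization device from graph drawing, conveys the flavour of this paradigm: rows and columns of a binary data matrix are repeatedly reordered by the mean position of their nonzero entries, so that related records and attributes drift together and biclusters emerge as contiguous blocks. The sketch below is a generic illustration (the function name and iteration count are assumptions), not the dissertation's CM solution:

```python
import numpy as np

def barycenter_order(M, iters=10):
    """Reorder rows/columns of a binary matrix by iterated barycenters."""
    rows = np.arange(M.shape[0])
    cols = np.arange(M.shape[1])
    for _ in range(iters):
        A = M[np.ix_(rows, cols)]
        # order rows by the mean column index of their 1-entries
        row_bary = [r.nonzero()[0].mean() if r.any() else -1.0 for r in A]
        rows = rows[np.argsort(row_bary, kind="stable")]
        A = M[np.ix_(rows, cols)]
        # then order columns by the mean row index of their 1-entries
        col_bary = [c.nonzero()[0].mean() if c.any() else -1.0 for c in A.T]
        cols = cols[np.argsort(col_bary, kind="stable")]
    return rows, cols

# two planted biclusters, then rows and columns shuffled
M = np.zeros((4, 4), dtype=int)
M[:2, :2] = 1
M[2:, 2:] = 1
M = M[[2, 0, 3, 1]][:, [1, 3, 0, 2]]
rows, cols = barycenter_order(M)
A = M[np.ix_(rows, cols)]
```

On this toy matrix a single sweep already restores the block-diagonal form, which illustrates why the same reordering machinery used to reduce wire crossings in VLSI layout can also surface biclusters in noisy data.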
Two other interesting and hard problems also addressed in this dissertation are (i) the
Minimum Attribute Subset Selection (MASS) problem and (ii) the Bandwidth
Minimization (BWM) problem of sparse matrices. The proposed CM technique is
demonstrated to provide very convincing results while attempting to solve the said
problems using real public domain data.
Pakistan is the fourth largest supplier of cotton in the world. An apparent anomaly has
been observed during 1989-97 between cotton yield and pesticide consumption in
Pakistan, showing unexpected periods of negative correlation. By applying the
indigenous CM technique for one-way clustering to real Agro-Met data (2001-2002), a possible explanation of the anomaly is presented in this thesis.