Trademark image retrieval by local features
The challenge of abstract trademark image retrieval as a test of machine vision algorithms has attracted considerable research interest in the past decade. Current
operational trademark retrieval systems involve manual annotation of the images
(the current ‘gold standard’). Accordingly, current systems require a substantial
amount of time and labour to access, and are therefore expensive to operate. This
thesis focuses on the development of algorithms that mimic aspects of human
visual perception in order to retrieve similar abstract trademark images
automatically. A significant category of trademark images is typically highly
stylised, comprising a collection of distinctive graphical elements that often
include geometric shapes. Therefore, in order to compare the similarity of such
images the principal aim of this research has been to develop a method for solving
the partial matching and shape perception problem.
There are few useful techniques for partial shape matching in the context of
trademark retrieval, because those existing techniques tend not to support multicomponent
retrieval. When this work was initiated most trademark image
retrieval systems represented images by means of global features, which are not
suited to solving the partial matching problem. Instead, the author has
investigated the use of local image features as a means to finding similarities
between trademark images that only partially match in terms of their subcomponents.
During the course of this work, it has been established that the
Harris and Chabat detectors could potentially perform sufficiently well to serve as
the basis for local feature extraction in trademark image retrieval. Early findings
in this investigation indicated that the well established SIFT (Scale Invariant
Feature Transform) local features, based on the Harris detector, could potentially
serve as an adequate underlying local representation for matching trademark
images.
There are few researchers who have used mechanisms based on human
perception for trademark image retrieval, implying that the shape representations
utilised in the past to solve this problem do not necessarily reflect the shapes
contained in these images, as characterised by human perception. In response, a
practical approach to trademark image retrieval by perceptual grouping has been
developed based on defining meta-features that are calculated from the spatial
configurations of SIFT local image features. This new technique measures certain
visual properties of the appearance of images containing multiple graphical
elements and supports perceptual grouping by exploiting the non-accidental
properties of their configuration.
Our validation experiments indicated that we were indeed able to capture
and quantify the differences in the global arrangement of sub-components evident
when comparing stylised images in terms of their visual appearance properties.
Such visual appearance properties, measured using 17 of the proposed meta-features,
include relative sub-component proximity, similarity, rotation and
symmetry. Similar work on meta-features, based on the above Gestalt proximity,
similarity, and simplicity groupings of local features, had not been reported in the
current computer vision literature at the time of undertaking this work.
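The Gestalt-style grouping described above can be illustrated with a toy meta-feature. The sketch below is not one of the 17 meta-features from the thesis; it is a hypothetical proximity measure over 2D keypoint locations (standing in for SIFT keypoints), included only to make the idea concrete.

```python
import numpy as np

def proximity_meta_feature(pts):
    """Mean nearest-neighbour distance of keypoint locations, normalised by
    the spread of the configuration -- a rough proxy for Gestalt proximity."""
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)               # ignore self-distances
    return float(d.min(axis=1).mean() / d[np.isfinite(d)].max())

rng = np.random.default_rng(5)
# Two tight clusters of keypoints vs. keypoints scattered at random.
clustered = np.concatenate([rng.normal(loc, 0.05, size=(10, 2))
                            for loc in ([0, 0], [5, 5])])
scattered = rng.uniform(0, 5, size=(20, 2))
# Tightly grouped keypoints score lower (near neighbours relative to extent).
print(proximity_meta_feature(clustered) < proximity_meta_feature(scattered))  # True
```

A retrieval system could compare such scores between a query and candidate images as one coordinate of a meta-feature vector.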
We adopted relevance feedback to allow the visual appearance
properties of relevant and non-relevant images returned in response to a query to
be determined by example. Since limited training data is available when
constructing a relevance classifier by means of user supplied relevance feedback,
the intrinsically non-parametric machine learning algorithm ID3 (Iterative
Dichotomiser 3) was selected to construct decision trees by means of dynamic
rule induction. We believe that this approach to capturing high-level visual
concepts, encoded by means of meta-features specified by example through
relevance feedback and decision tree classification to support flexible
trademark image retrieval, is wholly novel.
The retrieval performance of the above system was compared with that of two
other state-of-the-art trademark image retrieval systems: Artisan, developed
by Eakins (Eakins et al., 1998), and a system developed by Jiang (Jiang et
al., 2006). Using relevance feedback, our system achieves higher average
normalised precision than either of the systems developed by Eakins or Jiang.
However, while our
trademark image query and database set is based on an image dataset used by
Eakins, we employed different numbers of images. It was not possible to access
the same query set and image database used in the evaluation of Jiang's
trademark image retrieval system. Despite these differences in evaluation
methodology, our approach would appear to have the potential to improve
retrieval effectiveness.
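To illustrate the relevance-feedback classifier described above: ID3 greedily selects the attribute (here, a thresholded meta-feature) with the highest information gain. The sketch below uses made-up meta-feature vectors and relevance labels; the two features and all values are assumptions, not the thesis's data.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(samples, labels):
    """Choose the (feature index, threshold) with maximal information gain,
    in the spirit of ID3's greedy attribute selection."""
    base = entropy(labels)
    best = (None, None, -1.0)
    for f in range(len(samples[0])):
        for t in sorted({s[f] for s in samples}):
            left = [l for s, l in zip(samples, labels) if s[f] <= t]
            right = [l for s, l in zip(samples, labels) if s[f] > t]
            if not left or not right:
                continue
            gain = base - (len(left) / len(labels)) * entropy(left) \
                        - (len(right) / len(labels)) * entropy(right)
            if gain > best[2]:
                best = (f, t, gain)
    return best

# Toy relevance-feedback data: each sample is a vector of two hypothetical
# meta-features (e.g. sub-component proximity, rotational similarity),
# labelled relevant (1) or non-relevant (0) by the user.
X = [[0.1, 0.9], [0.2, 0.8], [0.8, 0.2], [0.9, 0.1]]
y = [1, 1, 0, 0]
feature, threshold, gain = best_split(X, y)
print(feature, threshold, round(gain, 2))  # feature 0 splits at 0.2 with gain 1.0
```

A full ID3 tree applies this split recursively to each branch until the labels are pure, which is what makes dynamic rule induction from small feedback sets feasible.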
Automatic registration of multi-modal airborne imagery
This dissertation presents a novel technique based on Maximization of Mutual Information (MMI) and multi-resolution analysis to design an algorithm for automatic registration of multi-sensor images captured by various airborne cameras. In contrast to conventional methods that extract and employ feature points, MMI-based algorithms utilize the mutual information found between two given images to compute the registration parameters. These, in turn, are then utilized to perform multi-sensor registration for remote sensing images. The results indicate that the proposed algorithms are very effective in registering infrared images taken at three different wavelengths with a high resolution visual image of a given scene. The MMI technique has proven to be very robust with images acquired with the Wildfire Airborne Sensor Program (WASP) multi-sensor instrument. This dissertation also shows how wavelet-based techniques can be used in a multi-resolution analysis framework to significantly increase computational efficiency for images captured at different resolutions. The fundamental result of this thesis is the technique of using features in the images to enhance the robustness, accuracy and speed of MMI registration. This is done by using features to focus MMI on places that are rich in information. The new algorithm smoothly integrates with MMI and avoids any need for feature-matching, and then the applications of such extensions are studied. The first extension is the registration of cartographic maps and image data, which is very important for map updating and change detection. This is a difficult problem because map features such as roads and buildings may be mis-located and features extracted from images may not correspond to map features. Nonetheless, it is possible to obtain a general global registration of maps and images by applying statistical techniques to map and image features.
To solve the map-to-image registration problem, this research extends the MMI technique through a focus-of-attention mechanism that forces MMI to utilize correspondences that have a high probability of being information rich. The gradient-based parameter search and exhaustive parameter search methods are also compared. Both qualitative and quantitative analyses are used to assess the registration accuracy. Another difficult application is the fusion of LIDAR elevation or intensity data with imagery. Such applications are even more challenging when automated registration algorithms are needed. To improve the registration robustness, a salient area extraction algorithm is developed to overcome the distortion in the airborne and satellite images from different sensors. This extension combines the SIFT and Harris feature detection algorithms with MMI and the Harris corner label map to address difficult multi-modal registration problems through a combination of selection and focus-of-attention mechanisms together with mutual information. This two-step approach overcomes the above problems and provides a good initialization for the final step of the registration process. Experimental results are provided that demonstrate a variety of mapping applications including multi-modal IR imagery, map and image registration, and image and LIDAR registration.
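The core MMI quantity is the mutual information of the joint intensity histogram of the two images. A minimal sketch, assuming simple equal-width binning (the dissertation's exact estimator and parameter-search strategies are not reproduced here):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally-sized images, estimated from
    their joint intensity histogram -- the quantity MMI maximises."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image b
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
# A perfectly registered (identical) pair carries maximal information;
# shuffling one image destroys the statistical dependence.
aligned = mutual_information(img, img)
shuffled = mutual_information(img, rng.permutation(img.ravel()).reshape(64, 64))
print(aligned > shuffled)  # True
```

A registration loop would evaluate this score over candidate transformation parameters (by gradient or exhaustive search, as compared in the dissertation) and keep the maximiser.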
Investigation of Computer Vision Concepts and Methods for Structural Health Monitoring and Identification Applications
This study presents a comprehensive investigation of methods and technologies for developing a computer vision-based framework for Structural Health Monitoring (SHM) and Structural Identification (St-Id) for civil infrastructure systems, with particular emphasis on various types of bridges. SHM has been implemented on various structures over the last two decades, yet there are issues such as considerable cost, field implementation time and excessive labor needs for the instrumentation of sensors, cable wiring work and possible interruptions during implementation. These issues make it viable only when major investments in SHM are warranted for decision making. For other cases, a practical and effective solution is needed, and a computer vision-based framework can be a viable alternative. Computer vision-based SHM has been explored over the last decade. Unlike most of the vision-based structural identification studies and practices, which focus either on structural input (vehicle location) estimation or on structural output (structural displacement and strain responses) estimation, the proposed framework combines the vision-based structural input and the structural output from non-contact sensors to overcome the limitations given above. First, this study develops a series of computer vision-based displacement measurement methods for structural response (structural output) monitoring which can be applied to different infrastructures such as grandstands, stadiums, towers, footbridges, small/medium span concrete bridges, railway bridges, and long span bridges, and under different loading cases such as human crowds, pedestrians, wind, vehicles, etc. Structural behavior, modal properties, load carrying capacities, structural serviceability and performance are investigated using vision-based methods and validated by comparing with conventional SHM approaches.
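A minimal sketch of vision-based displacement measurement: track a reference patch between frames by normalized cross-correlation over an integer-pixel search window. This is a generic template-matching baseline, not the study's specific methods (which include spatio-temporal context learning and Taylor approximation); all sizes and positions below are illustrative.

```python
import numpy as np

def ncc_shift(ref_patch, frame, top, left, search=5):
    """Estimate the integer-pixel displacement of a target patch by
    normalized cross-correlation over a small search window."""
    h, w = ref_patch.shape
    r0 = (ref_patch - ref_patch.mean()) / ref_patch.std()
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame[top + dy:top + dy + h, left + dx:left + dx + w]
            score = float((r0 * (cand - cand.mean()) / cand.std()).mean())
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx

rng = np.random.default_rng(1)
scene = rng.random((100, 100))
patch = scene[40:56, 40:56].copy()                         # target in frame 1
shifted = np.roll(np.roll(scene, 3, axis=0), -2, axis=1)   # scene moved (3, -2)
print(ncc_shift(patch, shifted, 40, 40))  # (3, -2)
```

Sub-pixel accuracy, as required for modal analysis, would be obtained by interpolating the correlation surface around this integer-pixel peak.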
In this study, some of the most famous landmark structures such as long span bridges are utilized as case studies. This study also investigated the serviceability status of structures by using computer vision-based methods. Subsequently, issues and considerations for computer vision-based measurement in field application are discussed and recommendations are provided for better results. This study also proposes a robust vision-based method for displacement measurement using spatio-temporal context learning and Taylor approximation to overcome the difficulties of vision-based monitoring under adverse environmental factors such as fog and illumination change. In addition, it is shown that the external load distribution on structures (structural input) can be estimated by using visual tracking, and afterward the load rating of a bridge can be determined by using the load distribution factors extracted from computer vision-based methods. By combining the structural input and output results, the unit influence line (UIL) of a structure is extracted during daily traffic using only cameras, after which the external loads can be estimated from the extracted UIL. Finally, condition assessment at the global structural level can be achieved using the structural input and output, both obtained from computer vision approaches, giving a normalized response irrespective of the type and/or load configurations of the vehicles or human loads.
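The UIL idea above can be sketched as a linear inverse problem: if responses superpose linearly, each frame's measured displacement is the sum of the UIL sampled at the tracked axle positions, so the UIL can be recovered by least squares. All shapes and numbers below are illustrative assumptions, not the study's data.

```python
import numpy as np

# Synthetic setup: a bridge deck discretised into 30 positions with an
# unknown unit influence line (UIL); each video frame yields tracked axle
# positions (structural input) and one displacement reading (structural output).
rng = np.random.default_rng(6)
n_pos, n_obs = 30, 200
x = np.linspace(0.0, 1.0, n_pos)
uil_true = x * (1.0 - x)            # hypothetical UIL shape for illustration

A = np.zeros((n_obs, n_pos))
for i in range(n_pos):
    A[i, i] = 1.0                   # calibration pass: one unit axle per frame
for i in range(n_pos, n_obs):
    A[i, rng.integers(0, n_pos, size=2)] = 1.0   # daily traffic: two unit axles
response = A @ uil_true             # superposed responses (linear structure)

# Recover the UIL by least squares from camera-derived inputs and outputs.
uil_est, *_ = np.linalg.lstsq(A, response, rcond=None)
print(np.allclose(uil_est, uil_true))  # True
```

With the UIL in hand, the same linear model run forward turns tracked vehicle positions and measured responses into load estimates.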
Point Cloud Registration for LiDAR and Photogrammetric Data: a Critical Synthesis and Performance Analysis on Classic and Deep Learning Algorithms
Recent advances in computer vision and deep learning have shown promising
performance in estimating rigid/similarity transformation between unregistered
point clouds of complex objects and scenes. However, their performances are
mostly evaluated using a limited number of datasets from a single sensor (e.g.
Kinect or RealSense cameras), lacking a comprehensive overview of their
applicability in photogrammetric 3D mapping scenarios. In this work, we provide
a comprehensive review of the state-of-the-art (SOTA) point cloud registration
methods, where we analyze and evaluate these methods using a diverse set of
point cloud data from indoor to satellite sources. The quantitative analysis
allows for exploring the strengths, applicability, challenges, and future
trends of these methods. In contrast to existing analysis works that introduce
point cloud registration as a holistic process, our experimental analysis is
based on its inherent two-step process to better comprehend these approaches
including feature/keypoint-based initial coarse registration and dense fine
registration through cloud-to-cloud (C2C) optimization. More than ten methods,
including classic hand-crafted, deep-learning-based feature correspondence, and
robust C2C methods were tested. We observed that the success rate of most of
the algorithms is below 40% on the datasets we tested, and that there is still
a large margin for improvement upon existing algorithms concerning 3D sparse
correspondence search and the ability to register point clouds with
complex geometry and occlusions. With the evaluated statistics on three
datasets, we identify the best-performing methods for each step, provide our
recommendations, and offer an outlook on future efforts.
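The two-step process described above can be sketched in miniature: a coarse closed-form alignment from assumed feature correspondences (Kabsch/SVD), followed by fine C2C refinement with a naive ICP loop. This is a generic illustration on synthetic data, not any of the benchmarked methods.

```python
import numpy as np

def kabsch(src, dst):
    """Closed-form rigid alignment of corresponded 3D point sets via SVD."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=5):
    """Minimal cloud-to-cloud (C2C) refinement: alternate brute-force
    nearest-neighbour correspondence and closed-form re-alignment."""
    cur = src.copy()
    for _ in range(iters):
        d = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        R, t = kabsch(cur, dst[d.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(2)
dst = rng.random((50, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
src = (dst - 0.1) @ Rz.T                 # the same cloud, rotated and shifted

# Step 1 (coarse): features assumed matched, so correspondences are known.
R, t = kabsch(src, dst)
coarse = src @ R.T + t
# Step 2 (fine): C2C refinement without any given correspondences.
fine = icp(coarse, dst)
print(np.allclose(fine, dst, atol=1e-6))  # True
```

Real pipelines replace the assumed matches with detected/learned keypoint correspondences, which is exactly where the reviewed methods differ most.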
Vision-based retargeting for endoscopic navigation
Endoscopy is a standard procedure for visualising the human gastrointestinal tract. With the advances in biophotonics, imaging techniques such as narrow band imaging, confocal laser endomicroscopy, and optical coherence tomography can be combined with normal endoscopy for assisting the early diagnosis of diseases, such as cancer. In the past decade, optical biopsy has emerged to be an effective tool for tissue analysis, allowing in vivo and in situ assessment of pathological sites with real-time feature-enhanced microscopic images. However, the non-invasive nature of optical biopsy leads to an intra-examination retargeting problem, which is associated with the difficulty of re-localising a biopsied site consistently throughout the whole examination. In addition to intra-examination retargeting, retargeting of a pathological site is even more challenging across examinations, due to tissue deformation and changing tissue morphologies and appearances. The purpose of this thesis is to address both the intra- and inter-examination retargeting problems associated with optical biopsy. We propose a novel vision-based framework for intra-examination retargeting. The proposed framework is based on combining visual tracking and detection with online learning of the appearance of the biopsied site. Furthermore, a novel cascaded detection approach based on random forests and structured support vector machines is developed to achieve efficient retargeting. To cater for reliable inter-examination retargeting, the solution provided in this thesis is achieved by solving an image retrieval problem, for which an online scene association approach is proposed to summarise an endoscopic video collected in the first examination into distinctive scenes. A hashing-based approach is then used to learn the intrinsic representations of these scenes, such that retargeting can be achieved in subsequent examinations by retrieving the relevant images using the learnt representations. 
For performance evaluation of the proposed frameworks, extensive phantom, ex vivo and in vivo experiments have been conducted, with results demonstrating the robustness and potential clinical values of the methods proposed.
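The hashing-based retrieval step can be illustrated with random-hyperplane LSH: each summarised scene gets a short binary code, and inter-examination retargeting reduces to a Hamming-distance lookup. The thesis learns its hash functions; the random projections below are a simplified stand-in, and all descriptors are synthetic.

```python
import numpy as np

def lsh_codes(features, planes):
    """Binary codes from random hyperplane projections (sign bits) -- a
    simple stand-in for learnt hash representations of scenes."""
    return (features @ planes.T > 0).astype(np.uint8)

rng = np.random.default_rng(3)
scenes = rng.normal(size=(20, 128))   # descriptors of the summarised scenes
planes = rng.normal(size=(32, 128))   # 32 random hyperplanes -> 32-bit codes
codes = lsh_codes(scenes, planes)

# Retargeting in a later examination: re-observe scene 7 and retrieve it
# by Hamming distance over the stored codes.
qcode = lsh_codes(scenes[7:8], planes)[0]
hamming = (codes != qcode).sum(axis=1)
print(int(hamming.argmin()), int(hamming[7]))
```

Because codes are short bit strings, the lookup stays fast enough for use during a live examination, which is the point of hashing here.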
Robust and Optimal Methods for Geometric Sensor Data Alignment
Geometric sensor data alignment - the problem of finding the
rigid transformation that correctly aligns two sets of sensor
data without prior knowledge of how the data correspond - is a
fundamental task in computer vision and robotics. It is
inconvenient then that outliers and non-convexity are inherent to
the problem and present significant challenges for alignment
algorithms. Outliers are highly prevalent in sets of sensor data,
particularly when the sets overlap incompletely. Despite this,
many alignment objective functions are not robust to outliers,
leading to erroneous alignments. In addition, alignment problems
are highly non-convex, a property arising from the objective
function and the transformation. While finding a local optimum
may not be difficult, finding the global optimum is a hard
optimisation problem. These key challenges have not been fully
and jointly resolved in the existing literature, and so there is
a need for robust and optimal solutions to alignment problems.
Hence the objective of this thesis is to develop tractable
algorithms for geometric sensor data alignment that are robust to
outliers and not susceptible to spurious local optima.
This thesis makes several significant contributions to the
geometric alignment literature, founded on new insights into
robust alignment and the geometry of transformations. Firstly, a
novel discriminative sensor data representation is proposed that
has better viewpoint invariance than generative models and is
time and memory efficient without sacrificing model fidelity.
Secondly, a novel local optimisation algorithm is developed for
nD-nD geometric alignment under a robust distance measure. It
manifests a wider region of convergence and a greater robustness
to outliers and sampling artefacts than other local optimisation
algorithms. Thirdly, the first optimal solution for 3D-3D
geometric alignment with an inherently robust objective function
is proposed. It outperforms other geometric alignment algorithms
on challenging datasets due to its guaranteed optimality and
outlier robustness, and has an efficient parallel implementation.
Fourthly, the first optimal solution for 2D-3D geometric
alignment with an inherently robust objective function is
proposed. It outperforms existing approaches on challenging
datasets, reliably finding the global optimum, and has an
efficient parallel implementation. Finally, another optimal
solution is developed for 2D-3D geometric alignment, using a
robust surface alignment measure.
Ultimately, robust and optimal methods, such as those in this
thesis, are necessary to reliably find accurate solutions to
geometric sensor data alignment problems.
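For a flavour of what a robust (rather than least-squares) alignment objective buys, the sketch below runs iteratively reweighted closed-form alignment under a Geman-McClure-style loss on corresponded points contaminated with gross outliers. Note this is a local IRLS heuristic for illustration only, not the thesis's globally optimal algorithms; the scale constant and data are assumptions.

```python
import numpy as np

def weighted_kabsch(src, dst, w):
    """Weighted closed-form rigid alignment of corresponded 3D point sets."""
    w = w / w.sum()
    cs, cd = w @ src, w @ dst
    H = (src - cs).T @ (w[:, None] * (dst - cd))
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def robust_align(src, dst, iters=20, c=0.1):
    """IRLS alignment under a Geman-McClure-style robust loss: gross
    outlier correspondences receive vanishing weight."""
    w = np.ones(len(src))
    for _ in range(iters):
        R, t = weighted_kabsch(src, dst, w)
        r2 = ((src @ R.T + t - dst) ** 2).sum(axis=1)   # squared residuals
        w = (c ** 2 / (c ** 2 + r2)) ** 2               # down-weight outliers
    return R, t

rng = np.random.default_rng(4)
src = rng.random((60, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.2, -0.1, 0.3])
dst = src @ R_true.T + t_true
dst[:15] += rng.normal(scale=2.0, size=(15, 3))   # 25% gross outliers
R, t = robust_align(src, dst)
print(np.allclose(R, R_true, atol=1e-2))  # True
```

The catch, which motivates the thesis, is that IRLS like this only finds a local optimum; certifying the global optimum of a robust objective requires the global optimisation machinery developed in the contributions above.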