Convolutional neural network architecture for geometric matching
We address the problem of determining correspondences between two images in
agreement with a geometric model such as an affine or thin-plate spline
transformation, and estimating its parameters. The contributions of this work
are three-fold. First, we propose a convolutional neural network architecture
for geometric matching. The architecture is based on three main components that
mimic the standard steps of feature extraction, matching and simultaneous
inlier detection and model parameter estimation, while being trainable
end-to-end. Second, we demonstrate that the network parameters can be trained
from synthetically generated imagery without the need for manual annotation and
that our matching layer significantly increases generalization capabilities to
never seen before images. Finally, we show that the same model can perform both
instance-level and category-level matching giving state-of-the-art results on
the challenging Proposal Flow dataset.
Comment: In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017).
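The matching component described above can be illustrated as a dense correlation layer: every spatial feature of one map is scored against every feature of the other. A minimal NumPy sketch (shapes, normalization, and the self-matching check are illustrative choices, not the paper's exact implementation):

```python
import numpy as np

def correlation_layer(f_a, f_b):
    """Dense matching layer: compare every spatial location of f_b with
    every location of f_a via normalized inner products.
    f_a, f_b: feature maps of shape (C, H, W); returns (H*W, H, W)."""
    c, h, w = f_a.shape
    a = f_a.reshape(c, -1)                       # (C, H*W)
    b = f_b.reshape(c, -1)
    # L2-normalize per-location features so scores are cosine similarities
    a = a / (np.linalg.norm(a, axis=0, keepdims=True) + 1e-8)
    b = b / (np.linalg.norm(b, axis=0, keepdims=True) + 1e-8)
    corr = b.T @ a                               # (H*W, H*W) match scores
    return corr.reshape(h * w, h, w)

rng = np.random.default_rng(0)
f_map = rng.normal(size=(8, 4, 4))
corr = correlation_layer(f_map, f_map)          # self-matching sanity check
```

In the architecture described above, a tensor of match scores like this is what the subsequent regression stage consumes to estimate the transformation parameters.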
Multimodal Three Dimensional Scene Reconstruction, The Gaussian Fields Framework
The focus of this research is on building 3D representations of real-world scenes and objects using different imaging sensors: primarily range acquisition devices (such as laser scanners and stereo systems) that allow the recovery of 3D geometry, and multi-spectral image sequences, including visual and thermal IR images, that provide additional scene characteristics. The crucial technical challenge that we addressed is the automatic point-set registration task. In this context, our main contribution is the development of an optimization-based method at the core of which lies a unified criterion that solves simultaneously for the dense point correspondence and transformation recovery problems. The new criterion has a straightforward expression in terms of the datasets and the alignment parameters and was used primarily for 3D rigid registration of point-sets. However, it also proved useful for feature-based multimodal image alignment. We derived our method from simple Boolean matching principles by approximation and relaxation. One of the main advantages of the proposed approach, as compared to the widely used class of Iterative Closest Point (ICP) algorithms, is convexity in the neighborhood of the registration parameters and continuous differentiability, allowing for the use of standard gradient-based optimization techniques. Physically, the criterion is interpreted in terms of a Gaussian Force Field exerted by one point-set on the other. Such a formulation proved useful for controlling and increasing the region of convergence, hence allowing for more autonomy in correspondence tasks. Furthermore, the criterion can be computed with linear complexity using recently developed Fast Gauss Transform numerical techniques. In addition, we introduced a new local feature descriptor that was derived from visual saliency principles and which significantly enhanced the performance of the registration algorithm.
The resulting technique was subjected to a thorough experimental analysis that highlighted its strengths and showed its limitations. Our current applications are in the field of 3D modeling for inspection, surveillance, and biometrics. However, since this matching framework can be applied to any type of data that can be represented as N-dimensional point-sets, the scope of the method is shown to reach many more pattern analysis applications.
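The unified criterion can be illustrated as a sum of Gaussian attractions between the two point-sets. A minimal 2D rigid sketch (the sigma value and the direct O(N^2) evaluation are illustrative; the thesis works in 3D and uses the Fast Gauss Transform for linear complexity):

```python
import numpy as np

def gaussian_field_energy(moving, fixed, theta, t, sigma=1.0):
    """Gaussian-field criterion for 2D rigid alignment: a smooth,
    differentiable score that peaks when the transformed moving set
    coincides with the fixed set."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    moved = moving @ rot.T + t
    # Pairwise squared distances -> sum of Gaussian attractions
    d2 = ((moved[:, None, :] - fixed[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2).sum()

rng = np.random.default_rng(1)
pts = rng.normal(size=(30, 2))
aligned = gaussian_field_energy(pts, pts, 0.0, np.zeros(2))
shifted = gaussian_field_energy(pts, pts, 0.3, np.array([0.5, -0.2]))
```

Because the score is smooth in theta and t, a standard gradient-based optimizer can climb it, in contrast to the non-differentiable ICP objective mentioned above.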
Disparate View Matching
Matching of disparate views has gained significance in computer vision due to its role in many novel application areas. Being able to match images of the same scene captured during day and night, between a historic and contemporary picture of a scene, and between aerial and ground-level views of a building facade all enable novel applications ranging from loop-closure detection for structure-from-motion and re-photography to geo-localization of a street-level image using reference imagery captured from the air. The goal of this work is to develop novel features and methods that address matching problems where direct appearance-based correspondences are either difficult to obtain or infeasible because of the lack of appearance similarity altogether. To address these problems, we propose methods that span the appearance-geometry spectrum in terms of both the use of these cues as well as the ability of each method to handle variations in appearance and geometry. First, we consider the problem of geo-localization of a query street-level image using a reference database of building facades captured from a bird's eye view. To address this wide-baseline facade matching problem, a novel scale-selective self-similarity feature that avoids direct comparison of appearance between disparate facade images is presented. Next, to address image matching problems with more extreme appearance variation, a novel representation for matchable images expressed in terms of the eigen-functions of the joint graph of the two images is presented. This representation is used to derive features that are persistent across wide variations in appearance. Next, the problem setting of matching between a street-level image and a digital elevation map (DEM) is considered. Given the limited appearance information available in this scenario, the matching approach has to rely more significantly on geometric cues.
Therefore, a purely geometric method to establish correspondences between building corners in the DEM and the visible corners in the query image is presented. Finally, to generalize this problem setting we address the problem of establishing correspondences between 3D and 2D point clouds using geometric means alone. A novel framework for incorporating purely geometric constraints into a higher-order graph matching framework is presented with specific formulations for the three-point calibrated absolute camera pose problem (P3P), two-point upright camera pose problem (Up2p) and the three-plus-one relative camera pose problem.
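The joint-graph representation mentioned above can be illustrated by embedding both images' local features with eigenvectors of the Laplacian of their combined affinity graph, so that corresponding features land at nearby coordinates. A toy NumPy sketch (the node features, Gaussian affinity, and choice of k are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def joint_graph_eigenfunctions(feat_a, feat_b, sigma=1.0, k=3):
    """Embed the features of two images with eigenvectors of the
    normalized Laplacian of their joint affinity graph; matchable
    features get similar coordinates in the shared embedding."""
    x = np.vstack([feat_a, feat_b])                  # joint node set
    d2 = ((x[:, None] - x[None, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))               # Gaussian affinities
    d_isqrt = np.diag(1.0 / np.sqrt(w.sum(1)))
    lap = np.eye(len(x)) - d_isqrt @ w @ d_isqrt     # normalized Laplacian
    _, vecs = np.linalg.eigh(lap)                    # ascending eigenvalues
    emb = vecs[:, 1:k + 1]                           # smoothest non-trivial modes
    return emb[:len(feat_a)], emb[len(feat_a):]

# Features of the "same" scene under a small appearance perturbation
rng = np.random.default_rng(2)
fa = rng.normal(size=(5, 4))
fb = fa + 0.01 * rng.normal(size=(5, 4))
ea, eb = joint_graph_eigenfunctions(fa, fb)
```

Because the two images share one graph, the embedding avoids the eigenvector sign and ordering ambiguities that arise when each image is decomposed separately.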
Coronary Artery Segmentation and Motion Modelling
Conventional coronary artery bypass surgery requires an invasive sternotomy and the
use of a cardiopulmonary bypass, which leads to a long recovery period and a high
risk of infection. Totally endoscopic coronary artery bypass (TECAB) surgery
based on image-guided robotic surgical approaches has been developed to allow
clinicians to conduct the bypass surgery off-pump with only three pinhole incisions
in the chest cavity, through which two robotic arms and one stereo endoscopic camera
are inserted. However, the restricted field of view of the stereo endoscopic images leads
to possible vessel misidentification and coronary artery mis-localization. This results
in 20-30% conversion rates from TECAB surgery to the conventional approach.
We have constructed patient-specific 3D + time coronary artery and left ventricle
motion models from preoperative 4D Computed Tomography Angiography (CTA)
scans. Through temporally and spatially aligning this model with the intraoperative
endoscopic views of the patient's beating heart, this work assists the surgeon to identify
and locate the correct coronaries during TECAB procedures. Thus, this work has
the prospect of reducing the conversion rate from TECAB to conventional coronary
bypass procedures.
This thesis mainly focuses on designing segmentation and motion tracking methods
for the coronary arteries in order to build pre-operative patient-specific motion models.
Various vessel centreline extraction and lumen segmentation algorithms are presented,
including intensity-based approaches, a geometric model-matching method, and a
morphology-based method. A probabilistic atlas of the coronary arteries is formed
from a group of subjects to facilitate the vascular segmentation and registration procedures.
A non-rigid registration framework based on a free-form deformation model
and multi-level multi-channel large deformation diffeomorphic metric mapping is
proposed to track the coronary motion. The methods are applied to 4D CTA images
acquired from various groups of patients and quantitatively evaluated.
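The probabilistic-atlas idea can be illustrated as voxel-wise averaging of aligned binary segmentations, with the atlas then acting as a spatial prior on where vessels may appear. A toy 2D sketch (the thresholds and the prior rule are illustrative, not the thesis's algorithm, and it assumes the maps are already registered):

```python
import numpy as np

def probabilistic_atlas(label_maps):
    """Voxel-wise average of aligned binary segmentations from a group
    of subjects, giving P(vessel) at each voxel."""
    return np.stack(label_maps).astype(float).mean(axis=0)

def atlas_prior_segmentation(intensity, atlas, i_thresh=0.5, p_thresh=0.5):
    """Toy use of the atlas as a spatial prior: keep bright voxels only
    where the atlas deems a vessel plausible (thresholds illustrative)."""
    return (intensity > i_thresh) & (atlas > p_thresh)

# Three toy "subjects": a vessel running down column 1, one spurious voxel
maps = [np.zeros((4, 4), dtype=int) for _ in range(3)]
for m in maps:
    m[:, 1] = 1
maps[2][0, 3] = 1
atlas = probabilistic_atlas(maps)

intensity = np.zeros((4, 4))
intensity[:, 1] = 0.9
intensity[0, 3] = 0.9        # bright noise outside the atlas support
seg = atlas_prior_segmentation(intensity, atlas)
```

The spurious bright voxel is suppressed because the atlas assigns it a low vessel probability, which is the role the atlas plays in the segmentation and registration procedures above.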
Robust arbitrary-view gait recognition based on 3D partial similarity matching
Existing view-invariant gait recognition methods encounter difficulties due to limited number of available gait views and varying conditions during training. This paper proposes gait partial similarity matching that assumes a 3-dimensional (3D) object shares common view surfaces in significantly different views. Detecting such surfaces aids the extraction of gait features from multiple views. 3D parametric body models are morphed by pose and shape deformation from a template model using 2-dimensional (2D) gait silhouette as observation. The gait pose is estimated by a level set energy cost function from silhouettes including incomplete ones. Body shape deformation is achieved via Laplacian deformation energy function associated with inpainting gait silhouettes. Partial gait silhouettes in different views are extracted by gait partial region of interest elements selection and re-projected onto 2D space to construct partial gait energy images. A synthetic database with destination views and a multi-linear subspace classifier fused with majority voting are used to achieve arbitrary-view gait recognition that is robust to varying conditions. Experimental results on CMU, CASIA B, TUM-IITKGP, AVAMVG and KY4D datasets show the efficacy of the proposed method.
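The partial gait energy images mentioned above can be illustrated as masked averages of binary silhouettes over a gait cycle. A minimal sketch (the region mask here is an illustrative stand-in for the paper's view-shared surface selection):

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Gait energy image (GEI): pixel-wise average of binary silhouettes
    over one gait cycle."""
    return np.stack(silhouettes).astype(float).mean(axis=0)

def partial_gei(silhouettes, region_mask):
    """Partial GEI: restrict the energy image to a selected region
    (the mask stands in for the view-shared partial region selection)."""
    return gait_energy_image(silhouettes) * region_mask

s1 = np.array([[1, 1], [0, 0]])
s2 = np.array([[1, 0], [0, 0]])
mask = np.array([[1, 0], [1, 1]])
gei = gait_energy_image([s1, s2])
pgei = partial_gei([s1, s2], mask)
```

Pixels present in every frame keep energy 1.0, intermittent pixels get fractional energy, and the mask zeros out regions not shared across views.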
Efficient Human Activity Recognition in Large Image and Video Databases
Vision-based human action recognition has attracted considerable interest in recent research for its applications to video surveillance, content-based search, healthcare, and interactive games. Most existing research deals with building informative feature descriptors, designing efficient and robust algorithms, proposing versatile and challenging datasets, and fusing multiple modalities. Often, these approaches build on certain conventions such as the use of motion cues to determine video descriptors, application of off-the-shelf classifiers, and single-factor classification of videos. In this thesis, we deal with important but overlooked issues such as efficiency, simplicity, and scalability of human activity recognition in different application scenarios: controlled video environment (e.g. indoor surveillance), unconstrained videos (e.g. YouTube), depth or skeletal data (e.g. captured by Kinect), and person images (e.g. Flickr). In particular, we are interested in answering questions like (a) is it possible to efficiently recognize human actions in controlled videos without temporal cues? (b) given that the large-scale unconstrained video data are often of high dimension low sample size (HDLSS) nature, how to efficiently recognize human actions in such data? (c) considering the rich 3D motion information available from depth or motion capture sensors, is it possible to recognize both the actions and the actors using only the motion dynamics of underlying activities? and (d) can motion information from monocular videos be used for automatically determining saliency regions for recognizing actions in still images?
Robust surface modelling of visual hull from multiple silhouettes
Reconstructing depth information from images is one of the actively researched themes
in computer vision and its application involves most vision research areas from object
recognition to realistic visualisation. Amongst other useful vision-based reconstruction
techniques, this thesis extensively investigates the visual hull (VH) concept for volume
approximation and its robust surface modelling when various views of an object are
available. Assuming that multiple images are captured from a circular motion, projection
matrices are generally parameterised in terms of a rotation angle from a reference position
in order to facilitate the multi-camera calibration. However, this assumption is often
violated in practice, i.e., a pure rotation in a planar motion with an accurate rotation
angle is hardly realisable. To address this problem, this thesis first proposes a calibration
method associated with the approximate circular motion.
With these modified projection matrices, a resulting VH is represented by a hierarchical
tree structure of voxels from which surfaces are extracted by the Marching
cubes (MC) algorithm. However, the surfaces may have unexpected artefacts caused by
a coarse volume reconstruction, the topological ambiguity of the MC algorithm, and
imperfect image processing or calibration results. To avoid this sensitivity, this thesis
proposes a robust surface construction algorithm which initially classifies local convex
regions from imperfect MC vertices and then aggregates local surfaces constructed by the
3D convex hull algorithm. Furthermore, this thesis also explores the use of wide baseline
images to refine a coarse VH using an affine invariant region descriptor. This improves
the quality of VH when a small number of initial views is given.
In conclusion, the proposed methods achieve a 3D model with enhanced accuracy.
Also, robust surface modelling is retained when silhouette images are degraded by
practical noise.
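The visual hull construction underlying this work can be illustrated as silhouette-based voxel carving: a voxel survives only if it projects inside the silhouette in every view. A toy sketch with orthographic cameras (the thesis uses calibrated circular-motion projection matrices; these camera functions and the cube scene are illustrative):

```python
import numpy as np

def carve_visual_hull(voxels, cameras, silhouettes):
    """Keep a voxel only if it projects inside the silhouette in every
    view; the survivors approximate the visual hull. `cameras` are
    functions mapping (N, 3) points to integer (N, 2) pixel coords."""
    keep = np.ones(len(voxels), dtype=bool)
    for project, sil in zip(cameras, silhouettes):
        uv = project(voxels)
        inside = np.zeros(len(voxels), dtype=bool)
        ok = ((uv >= 0) & (uv < np.array(sil.shape))).all(axis=1)
        inside[ok] = sil[uv[ok, 0], uv[ok, 1]] > 0
        keep &= inside
    return keep

# A 4x4x4 voxel grid and two orthographic views of a 2x2x2 cube
axes = np.arange(4)
voxels = np.stack(np.meshgrid(axes, axes, axes, indexing="ij"), -1).reshape(-1, 3)
sil_xy = np.zeros((4, 4)); sil_xy[1:3, 1:3] = 1   # silhouette seen along z
sil_yz = np.zeros((4, 4)); sil_yz[1:3, 1:3] = 1   # silhouette seen along x
cameras = [lambda p: p[:, [0, 1]], lambda p: p[:, [1, 2]]]
keep = carve_visual_hull(voxels, cameras, [sil_xy, sil_yz])
```

The surviving voxels form the intersection of the back-projected silhouette cones, which is the hierarchical voxel volume that the MC surface-extraction step above operates on.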