Feature Lines for Illustrating Medical Surface Models: Mathematical Background and Survey
This paper provides a tutorial and survey for a specific kind of illustrative
visualization technique: feature lines. We examine different feature line
methods. For this, we present the differential geometry behind these concepts and adapt it to discrete differential geometry. All discrete differential geometry terms are explained for triangulated surface meshes. These utilities serve as the basis for the feature line methods. We provide
the reader with all knowledge to re-implement every feature line method.
Furthermore, we summarize the methods and suggest a guideline for which feature line algorithm is best suited to which kind of surface. Our work is motivated by, but not restricted to, medical and biological surface models. Comment: 33 pages
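As a toy illustration of the discrete setting the survey works in, the sketch below flags feature edges of a triangle mesh by thresholding the dihedral angle between adjacent face normals. This is only one of the simplest feature line criteria; the function name and threshold are illustrative, not taken from the paper.

```python
import numpy as np

def feature_edges(vertices, faces, angle_deg=30.0):
    """Flag mesh edges whose dihedral angle exceeds a threshold.

    vertices: (V, 3) float array; faces: (F, 3) int array of triangles.
    Returns a list of (i, j) vertex-index pairs marking feature edges.
    """
    # Per-face unit normals.
    v = vertices
    n = np.cross(v[faces[:, 1]] - v[faces[:, 0]],
                 v[faces[:, 2]] - v[faces[:, 0]])
    n /= np.linalg.norm(n, axis=1, keepdims=True)

    # Map each undirected edge to the faces sharing it.
    edge_faces = {}
    for fi, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(fi)

    # An interior edge is a feature edge when the normals of its two
    # incident faces disagree by more than angle_deg.
    cos_thresh = np.cos(np.radians(angle_deg))
    return [edge for edge, fs in edge_faces.items()
            if len(fs) == 2 and np.dot(n[fs[0]], n[fs[1]]) < cos_thresh]
```

On a mesh with a sharp crease, only the crease edge is returned; on a flat patch, nothing is.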
Image Feature Information Extraction for Interest Point Detection: A Comprehensive Review
Interest point detection is one of the most fundamental and critical problems
in computer vision and image processing. In this paper, we carry out a
comprehensive review on image feature information (IFI) extraction techniques
for interest point detection. To systematically introduce how the existing
interest point detection methods extract IFI from an input image, we propose a
taxonomy of the IFI extraction techniques for interest point detection.
According to this taxonomy, we discuss different types of IFI extraction
techniques for interest point detection. Furthermore, we identify the main unresolved issues in existing IFI extraction techniques, as well as interest point detection methods that have not been discussed in previous reviews. Popular existing datasets and evaluation standards are described, and the performance of eighteen state-of-the-art approaches is evaluated and discussed. Moreover, future research directions for IFI extraction techniques for interest point detection are elaborated.
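To make the notion of extracting feature information from intensities concrete, here is a minimal sketch (not from the review) of the classic Harris corner response, which derives its IFI from first-order image gradients; the box window and k = 0.05 are illustrative choices.

```python
import numpy as np

def harris_response(img, k=0.05, window=1):
    """Harris corner response from first-order intensity gradients."""
    # Image gradients via central differences.
    Iy, Ix = np.gradient(img.astype(float))

    # Sum gradient products over a (2*window+1)^2 neighborhood (box filter).
    def box(a, r=window):
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, 0), dx, 1)
        return out

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    # Corner measure: det(M) - k * trace(M)^2 of the structure tensor M.
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2
```

On a synthetic image containing a single bright square, the response peaks at the square's corner, where both gradient directions are present.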
Large scale evaluation of local image feature detectors on homography datasets
We present a large scale benchmark for the evaluation of local feature
detectors. Our key innovation is the introduction of a new evaluation protocol
which extends and improves the standard detection repeatability measure. The
new protocol is better for assessment on a large number of images and reduces
the dependency of the results on unwanted distractors such as the number of
detected features and the feature magnification factor. Additionally, our
protocol provides a comprehensive assessment of the expected performance of
detectors under several practical scenarios. Using images from the
recently-introduced HPatches dataset, we evaluate a range of state-of-the-art
local feature detectors on two main tasks: viewpoint and illumination invariant
detection. Contrary to previous detector evaluations, our study contains an
order of magnitude more image sequences, resulting in a quantitative evaluation
significantly more robust to over-fitting. We also show that traditional
detectors are still very competitive when compared to recent deep-learning
alternatives. Comment: Accepted to BMVC 201
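The standard repeatability measure that the protocol extends can be sketched as follows. This simplified version maps detections through the ground-truth homography and counts nearest matches within a pixel tolerance; it omits the normalizations and distractor controls the paper introduces.

```python
import numpy as np

def repeatability(kp_a, kp_b, H, eps=3.0):
    """Fraction of detections in image A that have a detection in image B
    within eps pixels after mapping A's keypoints through homography H.

    kp_a, kp_b: (N, 2) arrays of (x, y) keypoints; H: 3x3 homography A->B.
    """
    if len(kp_a) == 0 or len(kp_b) == 0:
        return 0.0
    # Map A's keypoints into B's frame (homogeneous coordinates).
    pts = np.hstack([kp_a, np.ones((len(kp_a), 1))]) @ H.T
    proj = pts[:, :2] / pts[:, 2:3]
    # Distance from each projected point to its nearest detection in B.
    d = np.linalg.norm(proj[:, None, :] - kp_b[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) <= eps))
```

With identical keypoints and the identity homography the score is 1.0; if the detections fail to follow the geometric transformation, the score drops.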
Taking the bite out of automated naming of characters in TV video
We investigate the problem of automatically labelling appearances of characters in TV or film material
with their names. This is tremendously challenging due to the huge variation in imaged appearance of each character and the weakness and ambiguity of available annotation. However, we demonstrate that high precision can be achieved by combining multiple sources of information, both visual and textual. The principal novelties that we introduce are: (i) automatic generation of time stamped character annotation by aligning subtitles and transcripts; (ii) strengthening the supervisory information by identifying
when characters are speaking. In addition, we incorporate complementary cues of face matching and clothing matching to propose common annotations for face tracks, and consider choices of classifier which can potentially correct errors made in the automatic extraction of training data from the weak textual annotation. Results are presented on episodes of the TV series "Buffy the Vampire Slayer".
Numerical simulations of string networks in the Abelian-Higgs model
We present the results of a field theory simulation of networks of strings in
the Abelian Higgs model. Starting from a random initial configuration we show
that the resulting vortex tangle approaches a self-similar regime in which the length density of lines of zeros of the Higgs field reduces as $t^{-2}$. We demonstrate
that the network loses energy directly into scalar and gauge radiation. These
results support a recent claim that particle production, and not gravitational
radiation, is the dominant energy loss mechanism for cosmic strings. This means
that cosmic strings in Grand Unified Theories are severely constrained by high
energy cosmic ray fluxes: either they are ruled out, or an implausibly small
fraction of their energy ends up in quarks and leptons. Comment: 4pp RevTeX, 3 eps figures, clarifications and new results included, to be published in Phys. Rev. Lett.
Contributions to the Completeness and Complementarity of Local Image Features
Doctoral thesis in Informatics Engineering presented to the Faculdade de Ciências e Tecnologia da Universidade de Coimbra. Local image feature detection (or extraction, if we want to use a more semantically correct term) is a central and extremely active research topic in the field of computer vision. Reliable solutions to prominent problems such as matching, content-based image retrieval, object (class) recognition, and symmetry detection often make use of local image features.
It is widely accepted that a good local feature detector is one that efficiently retrieves distinctive, accurate, and repeatable features in the presence of a wide variety of photometric and geometric transformations. However, these requirements are not always the most important. In fact, not all applications require the same properties from a local feature detector. We can distinguish three broad categories of applications according to the required properties. The first category includes applications in which the semantic meaning of a particular type of features is exploited. For instance, edge or even ridge detection can be used to identify blood vessels in medical images or watercourses in aerial images. Another example in this category is the use of blob extraction to identify blob-like organisms in microscopic images. A second category includes tasks such as matching, tracking, and registration, which mainly require distinctive, repeatable, and accurate features.
Finally, a third category comprises applications such as object (class) recognition, image retrieval, scene classification, and image compression. For this category, it is crucial that features preserve the most informative image content (robust image representation), while requirements such as repeatability and accuracy are of less importance.
Our research work is mainly focused on the problem of providing a robust image representation through the use of local features. The limited number of types of features that a local feature extractor responds to might be insufficient to provide the so-called robust image representation. It is fundamental to analyze the completeness of local features, i.e., the amount of image information preserved by local features, as well as the often neglected complementarity between sets of features.
The major contributions of this work come in the form of two substantially different local feature detectors aimed at providing considerably robust image representations. The first algorithm is an information theoretic-based keypoint extraction that responds to complementary local structures that are salient (highly informative) within the image context. This method represents a new paradigm in local feature extraction, as it introduces context-awareness principles. The second algorithm extracts Stable Salient Shapes, a novel type of regions, which are obtained through a feature-driven detection of Maximally Stable Extremal Regions (MSER). This method provides compact and robust image representations and overcomes some of the major shortcomings of MSER detection.
We empirically validate the methods by investigating the repeatability, accuracy, completeness, and complementarity of the proposed features on standard benchmarks. In light of these results, we discuss the applicability of both methods.
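As a rough illustration of what a completeness measure can look like (not the information-theoretic one used in the thesis), the sketch below reports the fraction of an image's gradient energy that falls inside disks around detected keypoints; the function name and radius are illustrative.

```python
import numpy as np

def coverage_completeness(img, keypoints, radius=5):
    """Crude completeness proxy: fraction of the image's gradient energy
    covered by the union of disks around detected keypoints.

    keypoints: iterable of (x, y) positions.
    """
    gy, gx = np.gradient(img.astype(float))
    energy = gx ** 2 + gy ** 2
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    mask = np.zeros(img.shape, dtype=bool)
    for (x, y) in keypoints:
        mask |= (yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2
    total = energy.sum()
    return float(energy[mask].sum() / total) if total > 0 else 0.0
```

A detector that places its keypoints on the structured parts of the image scores close to 1; one that misses them scores close to 0.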
Graph-based Spatial Motion Tracking Using Affine-covariant Regions
This thesis considers the task of spatial motion reconstruction from image sequences using a stereoscopic camera setup. In a variety of fields, such as flow analysis in physics or the measurement of oscillation characteristics and damping behavior in mechanical engineering, efficient and accurate methods for motion analysis are of great importance. This work discusses each algorithmic step of the motion reconstruction problem using a set of freely available image sequences. The presented concepts and evaluation results are of a generic nature and may thus be applied to a multitude of applications in various fields, where motion can be observed by two calibrated cameras. The first step in the processing chain of a motion reconstruction algorithm is concerned with the automated detection of salient locations (=features or regions) within each image of a given sequence. In this thesis, detection is directly performed on the natural texture of the observed objects instead of using artificial marker elements (as with many currently available methods). As one of the major contributions of this work, five well-known detection methods from the contemporary literature are compared to each other with regard to several performance measures, such as localization accuracy or the robustness under perspective distortions. The given results extend the available literature on the topic and facilitate the well-founded selection of appropriate detectors according to the requirements of specific target applications. In the second step, both spatial and temporal correspondences have to be established between features extracted from different images. With the former, two images taken at the same time instant but with different cameras are considered (stereo reconstruction) while with the latter, correspondences are sought between temporally adjacent images from the same camera instead (monocular feature tracking). 
With most classical methods, an observed object is either spatially reconstructed at a single time instant, yielding a set of three-dimensional coordinates, or its motion is analyzed separately within each camera, yielding a set of two-dimensional trajectories. A major contribution of this thesis is a concept for the unification of both stereo reconstruction and monocular tracking. Based on sets of two-dimensional trajectories from each camera of a stereo setup, the proposed method uses a graph-based approach to find correspondences not between single features but between entire trajectories. Thereby, the influence of locally ambiguous correspondences is mitigated significantly. The resulting spatial trajectories contain both the three-dimensional structure and the motion of the observed objects at the same time. To the best of the author's knowledge, a similar concept does not yet exist in the literature. In a detailed evaluation, the superiority of the new method is demonstrated.
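A toy version of trajectory-level (rather than feature-level) correspondence can be sketched as follows. It scores left/right trajectory pairs on a rectified stereo pair by epipolar consistency and disparity smoothness and then assigns greedily; this is far simpler than the graph-based formulation of the thesis and only illustrates the idea of matching whole trajectories.

```python
import numpy as np

def match_trajectories(left, right):
    """Greedily match 2D trajectories across a rectified stereo pair.

    left, right: lists of (T, 2) arrays sampled at the same T frames.
    In a rectified setup, corresponding points share the y coordinate
    and differ by a slowly varying disparity in x; each pair is scored
    by how well it obeys that relationship over the whole trajectory.
    Returns a list of (left_index, right_index) pairs.
    """
    costs = np.full((len(left), len(right)), np.inf)
    for i, tl in enumerate(left):
        for j, tr in enumerate(right):
            dy = tl[:, 1] - tr[:, 1]   # ~0 when rectified and matched
            dx = tl[:, 0] - tr[:, 0]   # disparity, should be smooth
            costs[i, j] = np.mean(dy ** 2) + np.var(dx)
    matches = []
    while np.isfinite(costs).any():
        i, j = np.unravel_index(np.argmin(costs), costs.shape)
        matches.append((int(i), int(j)))
        costs[i, :] = np.inf   # each trajectory is used at most once
        costs[:, j] = np.inf
    return matches
```

Aggregating the cost over the full trajectory is what suppresses locally ambiguous single-frame correspondences.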
A method to improve interest point detection and its GPU implementation
Interest point detection is an important low-level image processing technique with a wide range of applications. Point detectors have to be robust under affine, scale, and photometric changes. There are many scale- and affine-invariant point detectors, but they are not robust to large illumination changes. Many affine-invariant interest point detectors and region descriptors work on points detected using scale-invariant operators. Since the performance of those detectors depends on the performance of the scale-invariant detectors, it is important that the scale-invariant initial-stage detectors be highly robust. It is therefore important to design a detector that is very robust to illumination, because illumination changes are the most common. This research takes the illumination problem as its main focus and develops a scale-invariant detector with good robustness to illumination changes. The paper [6] showed that a contrast stretching technique considerably improves the performance of the Harris operator under illumination variations. In this research, the same contrast stretching function is incorporated into two different scale-invariant operators to make them illumination invariant. The performance of the algorithms is compared with the Harris-Laplace and Hessian-Laplace algorithms [15].
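A generic percentile-based contrast stretch (the specific function of [6] is not reproduced here) can be written as follows. Note that it maps any increasing affine change of illumination to the same normalized image, which is why gradient-based detectors become more stable after applying it.

```python
import numpy as np

def contrast_stretch(img, low_pct=1.0, high_pct=99.0):
    """Percentile-based linear contrast stretch to [0, 1].

    Remapping intensities before corner detection reduces the dependence
    of gradient magnitudes on the global illumination level.
    """
    lo, hi = np.percentile(img, [low_pct, high_pct])
    if hi <= lo:
        return np.zeros_like(img, dtype=float)
    return np.clip((img.astype(float) - lo) / (hi - lo), 0.0, 1.0)
```

A dimmed copy of an image (e.g. intensities scaled by 0.2 and offset by 0.1) stretches to the same result as the original, so any detector run afterwards sees identical input.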
Characteristic Evolution and Matching
I review the development of numerical evolution codes for general relativity
based upon the characteristic initial value problem. Progress in characteristic
evolution is traced from the early stage of 1D feasibility studies to 2D
axisymmetric codes that accurately simulate the oscillations and gravitational
collapse of relativistic stars and to current 3D codes that provide pieces of a
binary black hole spacetime. Cauchy codes have now been successful at
simulating all aspects of the binary black hole problem inside an artificially
constructed outer boundary. A prime application of characteristic evolution is
to extend such simulations to null infinity where the waveform from the binary
inspiral and merger can be unambiguously computed. This has now been
accomplished by Cauchy-characteristic extraction, where data for the
characteristic evolution is supplied by Cauchy data on an extraction worldtube
inside the artificial outer boundary. The ultimate application of
characteristic evolution is to eliminate the role of this outer boundary by
constructing a global solution via Cauchy-characteristic matching. Progress in
this direction is discussed. Comment: New version to appear in Living Reviews 2012. arXiv admin note:
updated version of arXiv:gr-qc/050809
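The flavor of marching along characteristics can be shown with the flat-space 1D wave equation in double-null coordinates, where the diamond scheme phi_N = phi_E + phi_W - phi_S is exact. Real characteristic codes in general relativity evolve the full Bondi-type hierarchy of hypersurface and evolution equations, so this is only a toy.

```python
import numpy as np

def characteristic_evolve(phi_u0, phi_v0):
    """March the 1D wave equation phi_{,uv} = 0 on a null grid.

    phi_u0: initial data along the v=0 characteristic (indexed by u),
    phi_v0: initial data along the u=0 characteristic (indexed by v);
    phi_u0[0] must equal phi_v0[0]. Each grid point is filled from the
    three already-known corners of its null diamond.
    """
    nu, nv = len(phi_u0), len(phi_v0)
    phi = np.zeros((nu, nv))
    phi[:, 0] = phi_u0
    phi[0, :] = phi_v0
    for i in range(1, nu):
        for j in range(1, nv):
            # Diamond scheme: phi_N = phi_E + phi_W - phi_S.
            phi[i, j] = phi[i, j - 1] + phi[i - 1, j] - phi[i - 1, j - 1]
    return phi
```

Because the general solution is phi(u, v) = f(u) + g(v), the scheme reproduces any such separable sum exactly from its data on the two initial null rays.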